I'm trying to write a simple script to print the first 5 lines of a webpage's source code, and then the request time it took for the page to load. Currently, I've been trying the following:
#!/bin/bash
# Enter a website, get data
output=$(curl $1 -s -w "%{time_connect}\n" | (head -n5; tail -n1))
echo "$output"
However, on some pages the tail output, which should be the request time, doesn't actually print, and I'm not sure why.
I've found that I can also use a while loop to iterate through the lines and print the whole thing, but is there a way to just echo the first few lines of a variable and then the last line of that same variable, so I can precede the request time with a heading (e.g. Request time: 0.489)?
I'd like to be able to format it as:
echo "HTML: $output\n"
echo "Request Time: $requestTime"
Thank you! Sorry if this seems very simple, I am really new to this language :). The main problem for me is getting all this data from the same request; doing two separate curl requests would be very simple.
head may read more than 5 lines of input in order to identify what it needs to output. This means the lines you intended to pass to tail may have already been consumed. It's safer to use a single process (awk, in this case) to handle all the output.
output=$(curl "$1" -s -w "%{time_connect}\n" | awk 'NR<=5 {print} END {print})
The carriage returns threw me. Try this:
echo "HTML: "
retn=$'\r'
i=0
while read item
do
item="${item/$retn/}" # Strip out the carriage-return
if (( i < 5 )); then # Only display the first 5 lines
echo "$item"
fi
(( i++ ))
requestTime="$item" # Grab the last line
done < <(curl $1 -s -w "%{time_connect}\n")
requestTime="${requestTime##*\>}" # Sanitise the last line
echo "Request Time: $requestTime"
I am using a curl command to get json data from an application called "Jira".
Stupidly (in my view), you cannot use the API to return more than 50 values at a time; no matter the command, 50 results is the hard limit. The only choice is to fetch the data in multiple requests, which they call "pagination".
This is the command here:
curl -i -X GET 'https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50&startAt=100' --user 'scouse_bob#mycompany.com:<sec_token_deets>'
This is the key piece of what I am trying to work into a loop to avoid having to do this manually each time:
startAt=100
My goal is to "somehow" have this loop in blocks of fifty, so, startAt=50 then startAt=100, startAt=150 etc and append the entire output to a file until the figure 650 is reached and / or there is no further output available.
I have played around with a command like this:
#!/bin/ksh
i=1
while [[ $i -lt 1000 ]] ; do
    curl -i -X GET 'https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50&startAt=100' --user 'scouse_bob#mycompany.com:<sec_token_deets>'
    echo "$i"
    (( i += 1 ))
done
This does not really get me far: although it will loop, I am uncertain how to apply the loop variable to startAt.
Help appreciated.
My goal is to "somehow" have this loop in blocks of fifty, so, startAt=50 then startAt=100, startAt=150 etc and append the entire output to a file until the figure 650 is reached and / or there is no further output available.
The former is easy:
i=0
while [[ $i -lt 650 ]]; do
    # if you meant up to 650 inclusive, change to -le 650 or -lt 700
    curl "https://host/path?blah&startAt=$i"
    # pipe to/through some processing if desired
    # note the URL is in " so $i is expanded but
    # other special chars like & don't screw up parsing
    # also -X GET is the default (without -d or similar) and can be omitted
    (( i+=50 ))
done
The latter depends on just what 'no further output available' looks like. I'd expect you probably don't get an HTTP error, but either a content type indicating an error or a JSON body containing an end, error, or no-data indication. How to recognize this depends on what you get, and I don't know this API. I'll guess you probably want something more or less like:
curl ... >tmpfile
if jq -e '.eof==true' tmpfile; then break; else cat/whatever tmpfile; fi
# or
if jq -e '.data|length==0' tmpfile; then break; else cat/whatever tmpfile; fi
where tmpfile is some suitable filename that won't conflict with your other files; the most general way is to use $(mktemp) (saved in a variable). Or, instead of a file, put the data in a variable with var=$(curl ...) and then use <<<"$var" as input to anything that reads stdin.
EDIT: I meant to make this CW to make it easier for anyone to add/fix the API specifics, but forgot; instead I encourage anyone who knows to edit.
You may want to stop when you get partial output i.e. if you ask for 50 and get 37, it may mean there is no more after those 37 and you don't need to try the next batch. Again this depends on the API which I don't know.
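Putting both pieces together, a minimal sketch (I don't know this API, so the isLast field is an assumption based on how Jira's paginated endpoints commonly report the last page, and all_projects.json is just a name chosen here; adjust the jq test to whatever your responses actually contain):
#!/bin/ksh
base='https://account_name.atlassian.net/rest/api/3/project/search?jql=ORDER%20BY%20Created&maxResults=50'
startAt=0
while [[ $startAt -lt 650 ]]; do
    page=$(curl -s "${base}&startAt=${startAt}" --user 'scouse_bob#mycompany.com:<sec_token_deets>')
    printf '%s\n' "$page" >> all_projects.json    # append this block of fifty
    # stop early when the server reports the last page (field name assumed)
    jq -e '.isLast==true' <<<"$page" >/dev/null && break
    (( startAt += 50 ))
done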
I would like to test a file for a string and an array of strings.
My problem is the order of things.
If there is a line "Alice was shot by Bob" my code calls both players dead even if Bob is still alive. So I only want to test for "${player} ${deaths}" and ignore any "${player}" after ${deaths}.
An example line from the log file:
18:45:23 [Server/Thread][INFO] Alice was shot by Bob using weapon
The code should recognise "Alice" and "was shot by" but not "Bob" because "Bob" is after the death message. If there is only a death message or a player name it should do nothing which it currently does. It should also ignore the "using weapon" and the "server stuff" before Alice.
This is what I got so far:
#!/bin/bash
# testing for death messages and performing separate actions for each player
screenlog="screen.log" # this is a growing logfile
tmpscreenlog="tmpscreen.log" # this is always the last line of screenlog
player01="Alice" # the first player
player02="Bob" # the second player
deaths=(    # an array of possible death messages
    "was shot by"
    "burned to death"
    "starved to death"
    "drowned"
)
while true; do
    tail -n1 ${screenlog} >> ${tmpscreenlog}    # this line creates a one line buffer from the growing screenlog file
    if [[ ! -z $(grep "${player01}\|${deaths[*]}" "${tmpscreenlog}") ]]; then    # if Alice and any death occurs in tmpscreen.log
        echo "Alice is dead!"    # output Alices death and perform some commands
        screen -Rd sessionname -X stuff "ban ${player01} You died! Thank you for participating.$(printf '\r')"
        # commands for Alice
    fi
    if [[ ! -z $(grep "${player02}\|${deaths[*]}" "${tmpscreenlog}") ]]; then    # if Bob and any death occurs in tmpscreen.log
        echo "Bob is dead!"    # output Bobs death and perform some commands
        screen -Rd sessionname -X stuff "ban ${player02} You died! Thank you for participating.$(printf '\r')"
        # commands for Bob
    fi
    rm ${tmpscreenlog}    # this line removes the one line screenlog buffer
    sleep 1s
done
Thank you for any suggestions and help <3
tmpscreenlog="tmpscreen.log" # this is always the last line of screenlog
Hey, this is not always true... What if two (or more) messages appear in the last second?
It is better to use shell pipes to handle such things. You could use something like
tail -f screen.log | awk '/^[^ ]+ was (shot|slain|killed|blown up) by/ { print $1 " is dead" }'
Thanks tripleee for simplifying
I would use the following pipeline to replace your whole script:
tail -n0 -f screen.log | sed -nE 's/.* ([A-Za-z]+) was (shot|slain|killed|blown up) by.*/\1 is dead/p'
The sed command will match lines that conform to your format, capturing the name of the dead player in the first capturing group, and replace those lines by your desired death message.
tail's -f option "follows" the file, outputting content as it is added to the log file and removing the need for a while loop.
I'm using -n0 to avoid matching lines that were present before you executed the command. If that's not a desired feature, just remove it; by default it will match starting from the last 10 lines of the file.
If you're using GNU grep, you could also rely on a lookahead to extract the killed player's name alone:
grep -Po '\w+(?= was (shot|slain|killed|blown up) by)'
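To go from detection back to the ban commands in your original script, here's a minimal sketch combining the tail -f approach with your screen command (sed's -u, for unbuffered output, is a GNU extension; the death messages are taken from your deaths array):
#!/bin/bash
tail -n0 -f screen.log \
| sed -nEu 's/.* ([A-Za-z]+) (was shot by|burned to death|starved to death|drowned).*/\1/p' \
| while read -r player; do
    echo "${player} is dead!"
    screen -Rd sessionname -X stuff "ban ${player} You died! Thank you for participating.$(printf '\r')"
done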
Newbie Alert!
I am trying to run a DNS record query for domain records in a CSV via a bash script. I want to find the MX records with host -t mx example.com and then record/output the result to another CSV.
Stuck at the stage of getting the script to run the host -t mx $domain command, because there must be a space between mx and the domain: host -t mx example.com.
What I have:
#!/bin/bash
while IFS=, read -r domain
do
    #echo ${domain/./\.}\
    host -t mx${domain/./\.}
done < test1.csv
Thanks
Edit 1; Adding Sample Input and Output CSV
Input CSV
domain
24i.co.ke,
28feb.co.ke,
4thestatewire.co.ke,
aakenya.co.ke,
Expected Output
domain,mx
24i.co.ke,"24i.co.ke mail is handled by 20 alt2.aspmx.l.google.com.
24i.co.ke mail is handled by 30 aspmx3.googlemail.com.
24i.co.ke mail is handled by 10 aspmx.l.google.com."
28feb.co.ke,"28feb.co.ke mail is handled by 30 aspmx3.googlemail.com.
28feb.co.ke mail is handled by 30 aspmx5.googlemail.com.
28feb.co.ke mail is handled by 30 aspmx2.googlemail.com.
28feb.co.ke mail is handled by 10 aspmx.l.google.com.
28feb.co.ke mail is handled by 20 alt1.aspmx.l.google.com.
28feb.co.ke mail is handled by 20 alt2.aspmx.l.google.com.
28feb.co.ke mail is handled by 30 aspmx4.googlemail.com."
4thestatewire.co.ke,Host 4thestatewire.co.ke not found: 3(NXDOMAIN)
aakenya.co.ke,"aakenya.co.ke mail is handled by 20 ukns1.accesskenya.com.
aakenya.co.ke mail is handled by 10 smtpin.accesskenya.com."
abacus.co.ke,
Your substitution, ${domain/./\.} is probably not doing what you expect (though the result may be harmless). I can see that you've tried some debugging with an echo line. It would be interesting to know what you thought this substitution would achieve.
Your input file is CSV with two fields, the second one empty. I can't see anything that you would need to translate or change in that first field to make it compatible with a DNS lookup.
#!/usr/bin/env bash
file="${1:-test1.csv}"
if [[ ! -f "$file" ]]; then
printf 'No file: %s\n' "$file" >&2
exit 1
fi
(
read -r header; printf '%s\n' "$header"
while IFS=, read -r domain; do
line="$(host -t mx "$domain" | sort | head -1)"
printf '%s,"%s"\n' "$domain" "$line"
done
) < "$file"
So...
This takes an input file as an optional argument. If the input file (or test1.csv if none is provided) does not exist, the script exits.
It takes the MX records for the domain, sorts them, then selects the first one. By doing this, we keep the lowest-numbered (highest-priority) MX record.
The while loop is in parentheses so that the header can be read from the same input stream as the loop. Note that parentheses denote a subshell, so variables set inside them will not be visible to the parts of the script outside the parentheses.
And finally, this actually prints some output, which your sample script did not. :-)
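Usage would then look something like this (mxlookup.sh is just a name assumed here):
chmod +x mxlookup.sh
./mxlookup.sh test1.csv > mx_results.csv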
Give this one-liner a try:
awk -F, 'FNR>1{ print $1 }' < input.txt \
| xargs -n 1 sh -c 'v="$(host -t mx $1)"; echo "$1,\"$v\""' argv0
Based on your input file, this reads from line 2 onward and uses the first field (the domain) without the trailing ,. The output is piped to xargs, which executes the host command and stores its output in a variable so that it can later be printed in your desired format, domain,"output".
The only thing pending would be to add the first line "domain,mx" to the output.
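For instance, a minimal sketch (output.csv is just a name chosen here):
printf 'domain,mx\n' > output.csv
awk -F, 'FNR>1{ print $1 }' < input.txt \
| xargs -n 1 sh -c 'v="$(host -t mx "$1")"; echo "$1,\"$v\""' argv0 >> output.csv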
I am currently testing a simple dictionary attack using bash scripts. I have encoded my password "Snake" with sha256sum by simply typing the following command:
echo -n Snake | sha256sum
This produced the following:
aaa73ac7721342eac5212f15feb2d5f7631e28222d8b79ffa835def1b81ff620 *-
I then copy pasted the hashed string into the program, but the script is not doing what is intended to do. The script is (Note that I have created a test dictionary text file which only contains 6 lines):
echo "Enter:"
read value
cat dict.txt | while read line1
do
    atax=$(echo -n "$line1" | sha256sum)
    if [[ "$atax" == "$value" ]];
    then
        echo "Cracked: $line1"
        exit 1
    fi
    echo "Trying: $line1"
done
Result:
Trying: Dog
Trying: Cat
Trying: Rabbit
Trying: Hamster
Trying: Goldfish
Trying: Snake
The code should display "Cracked: Snake" and terminate, when it compares the hashed string with the word "Snake". Where am I going wrong?
EDIT: The bug was indeed the DOS lines in my textfile. I made a unix file and the checksums matched. Thanks everyone.
One problem is that, as pakistanprogrammerclub points out, you're never initializing name (as opposed to line1).
Another problem is that sha256sum does not just print out the checksum, but also *- (meaning "I read the file from standard input in binary mode").
I'm not sure if there's a clean way to get just the checksum — probably there is, but I can't find it — but you can at least write something like this:
atax=$(echo -n "$name" | sha256sum | sed 's/ .*//')
(using sed to strip off everything from the space onwards).
A couple of issues: the variable name is not set anywhere; do you mean value? Also, it's better form to use redirection instead of cat:
while read ...; do ... done <dict.txt
Variables set by a while loop in a pipeline are not available in the parent shell (not the other way around, as I mistakenly said before); it's not an issue here, though.
It could be a cut-and-paste error; add an echo after the first read:
echo "value \"$value\""
also after atax is set
echo "line1 \"$line1\" atax \"$atax\""
I was given this text file, call stock.txt, the content of the text file is:
pepsi;drinks;3
fries;snacks;6
apple;fruits;9
baron;drinks;7
orange;fruits;2
chips;snacks;8
I will need to use bash-script to come up this output:
Total amount for drinks: 10
Total amount for snacks: 14
Total amount for fruits: 11
Total of everything: 35
My gut tells me I will need to use sed, group, grep and something else.
Where should I start?
I would break the exercise down into steps
Step 1: Read the file one line at a time
while read -r line
do
    # do something with $line
done
Step 2: Pattern match (drinks, snacks, fruits) and do some simple arithmetic. This step requires that you tokenize each line, which I'll leave as an exercise for you to figure out (one possible approach is sketched after the snippet below).
if [[ "$line" =~ "drinks" ]]
then
echo "matched drinks"
.
.
.
fi
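As a sketch of one common way to do the tokenizing, let read split each line on the ; delimiter:
while IFS=';' read -r name category price; do
    if [[ "$category" == "drinks" ]]; then
        echo "matched drinks: $name costs $price"
    fi
done < stock.txt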
Pure Bash. A nice application for an associative array:
declare -A category                  # associative array
IFS=';'
while read -r name cate price ; do
    (( category[$cate] += price ))
done < stock.txt
sum=0
for cate in "${!category[@]}"; do    # loop over the indices
    printf "Total amount for %s: %d\n" "$cate" "${category[$cate]}"
    (( sum += category[$cate] ))
done
printf "Total of everything: %d\n" "$sum"
There is a short description about processing comma-separated files in bash here:
http://www.cyberciti.biz/faq/unix-linux-bash-read-comma-separated-cvsfile/
You could do something similar. Just change IFS from comma to semicolon.
Oh yeah, and a general hint for learning bash: man is your friend. Use this command to see the manual pages for all (or most) commands and utilities.
Example: man read shows the manual page for the read command. On most systems it will be opened in less, so you exit the manual by pressing q (may sound silly, but it took me a while to figure that out).
The easy way to do this is using a hash table, which is supported directly by bash 4.x and of course can be found in awk and perl. If you don't have a hash table then you need to loop twice: once to collect the unique values of the second column, once to total.
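For example, a minimal awk sketch of that approach, using ; as the field separator to match stock.txt (note the category order in the output is arbitrary):
awk -F';' '
    { total[$2] += $3; sum += $3 }    # accumulate per category and overall
    END {
        for (cat in total)
            printf "Total amount for %s: %d\n", cat, total[cat]
        printf "Total of everything: %d\n", sum
    }
' stock.txt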
There are many ways to do this. Here's a fun one which doesn't use awk, sed or perl. The only external utilities I've used here are cut, sort and uniq. You could even replace cut with a little more effort. In fact, lines 5-9 could have been written more easily with grep (grep $kind stock.txt), but I avoided that to show off the power of bash.
for kind in $(cut -d\; -f 2 stock.txt | sort | uniq) ; do
    total=0
    while read d ; do
        total=$(( total+d ))
    done < <(
        while read line ; do
            [[ $line =~ $kind ]] && echo $line
        done < stock.txt | cut -d\; -f3
    )
    echo "Total amount for $kind: $total"
done
We lose the strict ordering of your original output here. An exercise for you might be to find a way not to do that.
Discussion:
The first line describes a sub-shell with a simple pipeline using cut. We read the second field from the stock.txt file, with fields delimited by ;, written \; here so the shell does not interpret it. The result is a newline-separated list of values from stock.txt. This is piped to sort, then uniq. This performs our "grouping" step, since the pipeline will output an alphabetic list of items from the second column but will only list each item once no matter how many times it appeared in the input file.
Also on the first line is a typical for loop: For each item resulting from the sub-shell we loop once, storing the value of the item in the variable kind. This is the other half of the grouping step, making sure that each "Total" output line occurs once.
On the second line total is initialized to zero so that it always resets whenever a new group is started.
The third line begins the 'totaling' loop, in which for the current kind we find the sum of its occurrences. Here we declare that we will read the variable d in from stdin on each iteration of the loop.
On the fourth line the totaling actually occurs: using shell arithmetic we add the value in d to the value in total.
Line five ends the while loop and then describes its input. We use shell input redirection via < to specify that the input to the loop, and thus to the read command, comes from a file. We then use process substitution to specify that the file will actually be the results of a command.
On the sixth line the command that will feed the while-read loop begins. It is itself another while-read loop, this time reading into the variable line. On the seventh line the test is performed via a conditional construct. Here we use [[ for its =~ operator, which is a pattern matching operator. We are testing to see whether $line matches our current $kind.
On the eighth line we end the inner while-read loop and specify that its input comes from the stock.txt file. We then pipe the output of the entire loop, which by now is simply all lines matching $kind, to cut and instruct it to show only the third field, the numeric one. On line nine we end the process substitution command, whose output is a newline-delimited list of numbers from lines belonging to the group specified by kind.
Given that the total is now known and the kind is known it is a simple matter to print the results to the screen.
The below answer is OP's. As it was edited into the question itself and OP hasn't come back for 6 years, I am editing the answer out of the question and posting it as wiki here.
My answer, to get the total price, I use this:
...
PRICE=0
IFS=";"    # new field separator, the end of line
while read name cate price
do
    let PRICE=PRICE+$price
done < stock.txt
echo $PRICE
When I echo, it's 35, which is correct. Now I will move on to using awk to get the sub-category results.
Whole Solution:
Thanks guys, I manage to do it myself. Here is my code:
#!/bin/bash
INPUT=stock.txt
PRICE=0
DRINKS=0
SNACKS=0
FRUITS=0
old_IFS=$IFS # save the field separator
IFS=";" # new field separator, the end of line
while read name cate price
do
    if [ $cate = "drinks" ]; then
        let DRINKS=DRINKS+$price
    fi
    if [ $cate = "snacks" ]; then
        let SNACKS=SNACKS+$price
    fi
    if [ $cate = "fruits" ]; then
        let FRUITS=FRUITS+$price
    fi
    # Total
    let PRICE=PRICE+$price
done < $INPUT
echo -e "Drinks: " $DRINKS
echo -e "Snacks: " $SNACKS
echo -e "Fruits: " $FRUITS
echo -e "Price " $PRICE
IFS=$old_IFS