How to poll the Asda website more frequently (bash)

I made a script that checks the Asda home delivery slots from their API.
Here it is; I call it get_slots.sh.
You have to start Tor first, or, if you don't use Tor, get rid of the --socks5-hostname line (you can see the Tor SOCKS port number on its command line with ps). But if you don't use Tor they might cancel your account if they get narky about you polling their website.
Obviously you have to change the variables at the top.
The query parameters and API URL were worked out with the inspector in Chrome while using their normal public-facing JavaScript front end, so nothing top secret.
#!/bin/bash
my_postcode="SW1A1AA" # CHANGEME
account_id=18972357834 # JUST INVENT A NUMBER
order_id=22985263473 # LIKEWISE
ua='user_agent_I_want_to_fake'
my_tor_port=9150
#----------------
#ftype="POPUP"
#ftype="INSTORE_PICKUP"
ftype="DELIVERY"
format="%Y-%m-%dT00:00:00+01:00"
start_date=$(date "+$format")
end_date=$(date -d "+16 days" "+$format")
read -r -d '' data <<EOF
{
"data": {
"customer_info": {
"account_id": "$account_id"
},
"end_date": "$end_date",
"order_info": {
"line_item_count": 0,
"order_id": "$order_id",
"restricted_item_types": [],
"sub_total_amount": 0,
"total_quantity": 0,
"volume": 0,
"weight": 0
},
"reserved_slot_id": "",
"service_address": {"postcode":"$my_postcode"},
"service_info": {
"enable_express": false,
"fulfillment_type": "$ftype"
},
"start_date": "$start_date"
},
"requestorigin": "gi"
}
EOF
data=$(echo "$data" | tr -d ' \n')   # flatten the payload to one line (none of the values contain spaces)
url='https://groceries.asda.com/api/v3/slot/view'
referer='https://groceries.asda.com/checkout/book-slot?origin=/account/orders'
curl -s \
--socks5-hostname localhost:$my_tor_port \
-H "Content-type: application/json; charset=utf-8" \
-H "Referer: $referer" \
-A "$ua" \
-d "$data" \
$url \
| python -m json.tool
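A quick sanity check (just a rough sketch that greps the pretty-printed output rather than parsing it properly) is to count how many slots come back as available:
./get_slots.sh | grep -c '"status": "AVAILABLE"'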
Anyway, now I make another script to keep running it and mail me if any slots are available.
There are more variables you need to change at the top of this one.
#!/bin/sh
me="my#email.address"
my_smtp_server="smtp.myisp.net:25"
#------------------------------------
mailed=0
ftmp=/tmp/slots.$$
while true
do
date
f=slots/`date +%Y%m%d/%H/%Y-%m-%d_%H%M%S`.json
d=`dirname $f`
[ -d $d ] || mkdir -p $d
./get_slots.sh > $f
if egrep -B1 'status.*"AVAILABLE"' $f > $ftmp
then
echo "found"
if [ $mailed -eq 0 ]
then
dates=`perl -nle '/start_time.*2020-(..-..T..):/ && print $1' $ftmp`
mailx \
-r "$me" -s "asda on $dates lol" \
-S smtp="$my_smtp_server" "$me" < $ftmp
echo "mailed"
mailed=1
fi
fi
sleep 120
done
So I'm being a bit naughty here: I need the timestamp of the slots with status available to put in the email, and I really can't be bothered to parse the JSON properly, so I just rely on it being in the line before the status.
The pretty-printed JSON puts the keys in alphabetical order and comes out with something like:
"slot_info": {
STUFF
"slot_type": null,
"start_time": "2020-06-10T19:00:00Z",
"status": "AVAILABLE",
"total_discount": 0.0,
"total_premium": 0.0,
MORE STUFF
So yeah, all I do is egrep -B1.
Oh yeah, I've also naughtily hard-coded 2020 instead of doing a proper regex for the year, because if this is all still going on after 2020 I might as well just starve anyway, so I don't want to over-engineer it.
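For what it's worth, if you have jq installed you could pull the times out without the -B1 trick; a rough sketch run against one of the saved files $f:
jq -r '.. | objects | select(.status? == "AVAILABLE") | .start_time' "$f"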
Anyway, as you can see, once it has mailed me it still keeps running, because I want to store the JSON files and maybe analyse them later; it just doesn't mail me again after that unless I restart it.
My question is: my script only checks every two minutes and I want it to check more often so I can beat other people to the slots.

Okay, sorted it: the sleep 120 is 2 minutes. I thought it was 1.2 minutes; I forgot a minute is 60 seconds, not 100.
Don't worry, I'm not going to do this every 5 seconds or anything!
Now that I know the sleep is working properly I can change it to 60, which is still no more often than a lot of the people sat there reloading the page manually, believe me.
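One small tweak if I do drop it to 60 (just an idea, and it needs bash rather than plain sh because of $RANDOM): add a little jitter so the requests don't land on an exact schedule.
sleep $((60 + RANDOM % 30))  # sleep 60-89 seconds instead of a fixed interval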

Related

Elasticsearch read_only_allow_delete auto setting

I have a problem with Elasticsearch. I tried the following:
$ curl -XPUT -H "Content-Type: application/json" \
http://localhost:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": false}'
My settings:
"settings": {
"index": {
"number_of_shards": "5",
"blocks": {
"read_only_allow_delete": "true"
},
"provided_name": "new-index",
"creation_date": "1515433832692",
"analysis": {
"filter": {
"ngram_filter": {
"type": "ngram",
"min_gram": "2",
"max_gram": "4"
}
},
"analyzer": {
"ngram_analyzer": {
"filter": [
"ngram_filter"
],
"type": "custom",
"tokenizer": "standard"
}
}
},
"number_of_replicas": "1",
"uuid": "OSG7CNAWR9-G3QC75K4oQQ",
"version": {
"created": "6010199"
}
}
}
When I check the settings afterwards it looks fine, but after only a few seconds (3-5) it is set back to true. I can't add new elements or query anything; only _search and delete work.
Does anyone have an idea how to resolve this?
NOTE: I'm using Elasticsearch version: 6.1.1
Elasticsearch automatically sets "read_only_allow_delete": "true" when hard disk space is low.
Find the files which are filling up your storage and delete/move them. Once you have sufficient storage available run the following command through the Dev Tool in Kibana:
PUT your_index_name/_settings
{
"index": {
"blocks": {
"read_only_allow_delete": "false"
}
}
}
OR (through the terminal):
$ curl -XPUT -H "Content-Type: application/json" \
http://localhost:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": false}'
as mentioned in your question.
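Before clearing the flag it's worth confirming that disk space really is the trigger; the _cat allocation endpoint shows per-node disk usage, for example:
$ curl -s 'http://localhost:9200/_cat/allocation?v'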
To add a sprinkling of value to the accepted answer (and because I'll google this and come back to it in future): in my case the read_only_allow_delete flag was set because the default disk watermark settings are percentage based, which on my large disk did not make much sense. So I changed these settings to be "size remaining" based, as the documentation explains.
So before setting read_only_allow_delete back to false, I first set the watermark values based on disk space:
(using Kibana UI):
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.disk.watermark.low": "20gb",
"cluster.routing.allocation.disk.watermark.high": "15gb",
"cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
}
}
PUT your_index_name/_settings
{
"index": {
"blocks": {
"read_only_allow_delete": "false"
}
}
}
OR (through the terminal):
$ curl -XPUT -H "Content-Type: application/json" \
http://localhost:9200/_cluster/settings \
-d '{"transient": {"cluster.routing.allocation.disk.watermark.low": "20gb",
"cluster.routing.allocation.disk.watermark.high": "15gb",
"cluster.routing.allocation.disk.watermark.flood_stage": "10gb"}}'
$ curl -XPUT -H "Content-Type: application/json" \
http://localhost:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": false}'
Background
We maintain a cluster with filebeat, metricbeat, packetbeat, etc. shippers pushing data into it. Invariably some index would become hot and we'd want to either disable writing to it for a time, or clean up and re-enable indices which had breached their low watermark thresholds and had automatically gone into read_only_allow_delete: true.
Bash Functions
To ease the management of our clusters for the rest of my team I wrote the following Bash functions to help perform these tasks without having to fumble around with curl or through Kibana's UI.
$ cat es_funcs.bash
### es wrapper cmd inventory
declare -A escmd
escmd[l]="./esl"
escmd[p]="./esp"
### es data node naming conventions
nodeBaseName="rdu-es-data-0"
declare -A esnode
esnode[l]="lab-${nodeBaseName}"
esnode[p]="${nodeBaseName}"
usage_chk1 () {
# usage msg for cmds w/ 1 arg
local env="$1"
[[ $env =~ [lp] ]] && return 0 || \
printf "\nUSAGE: ${FUNCNAME[1]} [l|p]\n\n" && return 1
}
enable_readonly_idxs () {
# set read_only_allow_delete flag
local env="$1"
usage_chk1 "$env" || return 1
DISALLOWDEL=$(cat <<-EOM
{
"index": {
"blocks": {
"read_only_allow_delete": "true"
}
}
}
EOM
)
${escmd[$env]} PUT '_all/_settings' -d "$DISALLOWDEL"
}
disable_readonly_idxs () {
# clear read_only_allow_delete flag
local env="$1"
usage_chk1 "$env" || return 1
ALLOWDEL=$(cat <<-EOM
{
"index": {
"blocks": {
"read_only_allow_delete": "false"
}
}
}
EOM
)
${escmd[$env]} PUT '_all/_settings' -d "$ALLOWDEL"
}
Example Run
The above functions can be sourced in your shell like so:
$ . es_funcs.bash
NOTE: The arrays at the top of the file map short names for clusters if you happen to have multiple. We have 2, one for our lab and one for our production. So I represented those as l and p.
You can then run them like this to enable the read_only_allow_delete attribute (true) on your l cluster:
$ enable_readonly_idxs l
{"acknowledged":true}
or p:
$ enable_readonly_idxs p
{"acknowledged":true}
Helper Script Overview
There's one additional script that contains the curl commands which I use to interact with the clusters. This script is referenced in the escmd array at the top of the es_funcs.bash file. The array contains names of symlinks to a single shell script, escli.bash. The links are called esl and esp.
$ ll
-rw-r--r-- 1 smingolelli staff 9035 Apr 10 23:38 es_funcs.bash
-rwxr-xr-x 1 smingolelli staff 1626 Apr 10 23:02 escli.bash
-rw-r--r-- 1 smingolelli staff 338 Apr 5 00:27 escli.conf
lrwxr-xr-x 1 smingolelli staff 10 Jan 23 08:12 esl -> escli.bash
lrwxr-xr-x 1 smingolelli staff 10 Jan 23 08:12 esp -> escli.bash
The escli.bash script:
$ cat escli.bash
#!/bin/bash
#------------------------------------------------
# Detect how we were called [l|p]
#------------------------------------------------
[[ $(basename $0) == "esl" ]] && env="lab1" || env="rdu1"
#------------------------------------------------
# source escli.conf variables
#------------------------------------------------
# g* tools via brew install coreutils
[ $(uname) == "Darwin" ] && readlink=greadlink || readlink=readlink
. $(dirname $($readlink -f $0))/escli.conf
usage () {
cat <<-EOF
USAGE: $0 [HEAD|GET|PUT|POST] '...ES REST CALL...'
EXAMPLES:
$0 GET '_cat/shards?pretty'
$0 GET '_cat/indices?pretty&v&human'
$0 GET '_cat'
$0 GET ''
$0 PUT '_all/_settings' -d "\$DATA"
$0 POST '_cluster/reroute' -d "\$DATA"
EOF
exit 1
}
[ "$1" == "" ] && usage
#------------------------------------------------
# ...ways to call curl.....
#------------------------------------------------
if [ "${1}" == "HEAD" ]; then
curl -I -skK \
<(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
"${esBaseUrl}/$2"
elif [ "${1}" == "PUT" ]; then
curl -skK \
<(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
-X$1 -H "${contType}" "${esBaseUrl}/$2" "$3" "$4"
elif [ "${1}" == "POST" ]; then
curl -skK \
<(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
-X$1 -H "${contType}" "${esBaseUrl}/$2" "$3" "$4"
else
curl -skK \
<(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
-X$1 "${esBaseUrl}/$2" "$3" "$4" "$5"
fi
This script takes a single property file, escli.conf. In this file you specify the commands to retrieve your username and password from wherever you keep them (I use LastPass, so I retrieve them via lpass), as well as the base URL to use for accessing your cluster's REST API.
$ cat escli.conf
#################################################
### props used by escli.bash
#################################################
usernameCmd='lpass show --username somedom.com'
passwordCmd='lpass show --password somedom.com'
esBaseUrl="https://es-data-01a.${env}.somedom.com:9200"
contType="Content-Type: application/json"
I've put all this together in a Github repo (linked below) which also includes additional functions beyond the above 2 that I'm showing as examples for this question.
References
https://github.com/slmingol/escli

Merge two json in bash (no jq)

I have two JSON files:
env.json
{
"environment":"INT"
}
roles.json
{
"run_list":[
"recipe[splunk-dj]",
"recipe[tideway]",
"recipe[AlertsSearch::newrelic]",
"recipe[AlertsSearch]"
]
}
The expected output should be something like this:
{
"environment":"INT",
"run_list":[
"recipe[splunk-dj]",
"recipe[tideway]",
"recipe[AlertsSearch::newrelic]",
"recipe[AlertsSearch]"
]
}
I need to merge these two JSON files (and others like them) into one single JSON file using only the built-in commands available to me.
I only have sed, cat, echo, tail, and wc at my disposal.
Tell whoever put the constraint "bash only" on the project that bash is not sufficient for processing JSON, and get jq.
$ jq --slurp 'add' env.json roles.json
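If the two files could ever share top-level keys, note that add does a shallow merge (later values win); jq's * operator merges objects recursively instead:
$ jq -s '.[0] * .[1]' env.json roles.json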
I couldn't use jq either, as the client's web host jails the shell user with a limited set of binaries, as most discount/reseller web hosting companies do. Luckily they usually have PHP available, so you can use a one-liner like this, which is roughly what I would place in my install/setup bash script, for example:
php -r '$json1 = "./env.json";$json2 = "./roles.json";$data = array_merge(json_decode(file_get_contents($json1), true),json_decode(file_get_contents($json2),true));echo json_encode($data, JSON_PRETTY_PRINT);'
For clarity, php -r accepts line feeds as well, so this also works:
php -r '
$json1 = "./env.json";
$json2 = "./roles.json";
$data = array_merge(json_decode(file_get_contents($json1), true), json_decode(file_get_contents($json2), true));
echo json_encode($data, JSON_PRETTY_PRINT);'
Output
{
"environment": "INT",
"run_list": [
"recipe[splunk-dj]",
"recipe[tideway]",
"recipe[AlertsSearch::newrelic]",
"recipe[AlertsSearch]"
]
}
A little bit hacky, but hopefully it will do. (Note it also needs head, and it assumes each file's opening brace is on the first line and the closing brace is alone on the last line.)
env_lines=`wc -l < $1`
env_output=`head -n $(($env_lines - 1)) $1`     # first file without its last line (the closing brace)
roles_lines=`wc -l < $2`
roles_output=`tail -n $(($roles_lines - 1)) $2` # second file without its first line (the opening brace)
echo "$env_output" "," "$roles_output"

Curl to return http status code along with the response

I use curl to get the HTTP headers so I can find the HTTP status code, and I also want the response body. I get the headers with the command
curl -I http://localhost
To get the response, I use the command
curl http://localhost
As soon as I use the -I flag, I get only the headers and the response body is no longer there. Is there a way to get both the HTTP response body and the headers/HTTP status code in one command?
I was able to get a solution by looking at the curl docs, which say to use - as the -o output target to send the output to stdout.
curl -o - -I http://localhost
To get the response with just the http return code, I could just do
curl -o /dev/null -s -w "%{http_code}\n" http://localhost
the verbose mode will tell you everything
curl -v http://localhost
I found this question because I wanted independent access to BOTH the response and the content in order to add some error handling for the user.
Curl allows you to customize output. You can print the HTTP status code to std out and write the contents to another file.
curl -s -o response.txt -w "%{http_code}" http://example.com
This allows you to check the return code and then decide if the response is worth printing, processing, logging, etc.
http_response=$(curl -s -o response.txt -w "%{http_code}" http://example.com)
if [ "$http_response" != "200" ]; then
# handle error
else
echo "Server returned:"
cat response.txt
fi
The %{http_code} is a variable substituted by curl. You can do a lot more, or send code to stderr, etc. See curl manual and the --write-out option.
-w, --write-out
Make curl display information on stdout after a completed
transfer. The format is a string that may contain plain
text mixed with any number of variables. The format can be
specified as a literal "string", or you can have curl read
the format from a file with "@filename" and to tell curl
to read the format from stdin you write "@-".
The variables present in the output format will be
substituted by the value or text that curl thinks fit, as
described below. All variables are specified as
%{variable_name} and to output a normal % you just write
them as %%. You can output a newline by using \n, a
carriage return with \r and a tab space with \t.
The output will be written to standard output, but this
can be switched to standard error by using %{stderr}.
https://man7.org/linux/man-pages/man1/curl.1.html
I use this command to print the status code without any other output. Additionally, it will only perform a HEAD request and follow the redirection (respectively -I and -L).
curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost
This makes it very easy to check the status code in a health script:
sh -c '[ $(curl -o /dev/null -I -L -s -w "%{http_code}" http://localhost) -eq 200 ]'
The -i option is the one that you want:
curl -i http://localhost
-i, --include Include protocol headers in the output (H/F)
Alternatively you can use the verbose option:
curl -v http://localhost
-v, --verbose Make the operation more talkative
I have used this :
request_cmd="$(curl -i -o - --silent -X GET --header 'Accept: application/json' --header 'Authorization: _your_auth_code==' 'https://example.com')"
To get the HTTP status
http_status=$(echo "$request_cmd" | grep HTTP | awk '{print $2}')
echo $http_status
To get the response body I've used this
output_response=$(echo "$request_cmd" | grep body)
echo $output_response
This command
curl http://localhost -w ", %{http_code}"
will get the comma separated body and status; you can split them to get them out.
You can change the delimiter as you like.
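A newline is a safer delimiter than a comma, since a body can easily contain commas; a bash sketch of splitting the two apart:
response=$(curl -s -w "\n%{http_code}" http://localhost)
status=${response##*$'\n'}  # everything after the last newline: the status code
body=${response%$'\n'*}     # everything before the last newline: the body
echo "status: $status"
echo "body: $body"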
This is a way to retrieve both the body and the status code and format them into proper JSON, or whatever format works for you. Some may argue it's an incorrect use of the write-out format option, but it works for me when I need both the body and the status code in my scripts, to check the status code and relay the server's response back.
curl -X GET -w "%{stderr}{\"status\": \"%{http_code}\", \"body\":\"%{stdout}\"}" -s -o - "https://github.com" 2>&1
Run the command above and you should get back JSON in this format:
{
"status" : <status code>,
"body" : <body of response>
}
With the -w write-out option, since stderr is printed first, you can format your output with the http_code variable and place the body of the response in a value (body) via the stdout variable. Then redirect stderr to stdout and you'll have both the http_code and the response body combined into one neat output.
To get response code along with response:
$ curl -kv https://www.example.org
To get just response code:
$ curl -kv https://www.example.org 2>&1 | grep -i 'HTTP/1.1 ' | awk '{print $3}'| sed -e 's/^[ \t]*//'
2>&1: error is stored in output for parsing
grep: filter the response code line from output
awk: filters out the response code from response code line
sed: removes any leading white spaces
For programmatic usage, I use the following :
curlwithcode() {
code=0
# Run curl in a separate command, capturing output of -w "%{http_code}" into statuscode
# and sending the content to a file with -o >(cat >/tmp/curl_body)
statuscode=$(curl -w "%{http_code}" \
-o >(cat >/tmp/curl_body) \
"$#"
) || code="$?"
body="$(cat /tmp/curl_body)"
echo "statuscode : $statuscode"
echo "exitcode : $code"
echo "body : $body"
}
curlwithcode https://api.github.com/users/tj
It shows following output :
statuscode : 200
exitcode : 0
body : {
"login": "tj",
"id": 25254,
...
}
My way to achieve this:
To get both (header and body), I usually perform a curl -D- <url> as in:
$ curl -D- http://localhost:1234/foo
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 20:59:21 GMT
{"data":["out.csv"]}
This will dump headers (-D) to stdout (-) (Look for --dump-header in man curl).
IMHO also very handy in this context:
I often use jq to get that JSON data (e.g. from some REST APIs) formatted. But as jq doesn't expect an HTTP header, the trick is to print the headers to stderr using -D/dev/stderr. Note that this time we also use -sS (--silent, --show-error) to suppress the progress meter (because we write to a pipe).
$ curl -sSD/dev/stderr http://localhost:1231/foo | jq .
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:08:22 GMT
{
"data": [
"out.csv"
]
}
I guess this can also be handy if you'd like to print the headers (for quick inspection) to the console but redirect the body to a file (e.g. when it's some kind of binary, so as not to mess up your terminal):
$ curl -sSD/dev/stderr http://localhost:1231 > /dev/null
HTTP/1.1 200 OK
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json
Date: Wed, 29 Jul 2020 21:20:02 GMT
Be aware: this is NOT the same as curl -I <url>! -I performs a HEAD request, not a GET request (look for --head in man curl). Yes, for most HTTP servers this will yield the same result, but I know a lot of business applications which don't implement the HEAD request at all ;-P
A one-liner, just to get the status-code would be:
curl -s -i https://www.google.com | head -1
Changing it to head -2 will give the time as well.
If you want a while-true loop over it, it would be:
URL="https://www.google.com"
while true; do
echo "------"
curl -s -i $URL | head -2
sleep 2;
done
Which produces the following, until you do cmd+C (or ctrl+C in Windows).
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:38 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:41 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:43 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:45 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:47 GMT
------
HTTP/2 200
date: Sun, 07 Feb 2021 20:03:49 GMT
A clear one to read, using a pipe:
function cg(){
curl -I --silent www.google.com | head -n 1 | awk -F' ' '{print $2}'
}
cg
# 200
Feel free to use my dotfile script here
Explanation
--silent: Don't show progress bar when using pipe
head -n 1: Only show the first line
-F' ': separate text by columns using separator space
'{print $2}': show the second column
Some good answers here, but like the OP I found myself wanting, in a scripting context, all of:
any response body returned by the server, regardless of the response status-code: some services will send error details e.g. in JSON form when the response is an error
the HTTP response code
the curl exit status code
This is difficult to achieve with a single curl invocation and I was looking for a complete solution/example, since the required processing is complex.
I combined some other bash recipes on multiplexing stdout/stderr/return-code with some of the ideas here to arrive at the following example:
{
IFS= read -rd '' out
IFS= read -rd '' http_code
IFS= read -rd '' status
} < <({ out=$(curl -sSL -o /dev/stderr -w "%{http_code}" 'https://httpbin.org/json'); } 2>&1; printf '\0%s' "$out" "$?")
Then the results can be found in variables:
echo out $out
echo http_code $http_code
echo status $status
Results:
out { "slideshow": { "author": "Yours Truly", "date": "date of publication", "slides": [ { "title": "Wake up to WonderWidgets!", "type": "all" }, { "items": [ "Why <em>WonderWidgets</em> are great", "Who <em>buys</em> WonderWidgets" ], "title": "Overview", "type": "all" } ], "title": "Sample Slide Show" } }
http_code 200
status 0
The script works by multiplexing the output, HTTP response code and curl exit status separated by null characters, then reading these back into the current shell/script. It can be tested with curl requests that would return a >=400 response code but also produce output.
Note that without the -f flag, curl won't return non-zero error codes when the server returns an abnormal HTTP response code, i.e. >= 400, and with the -f flag, server output is suppressed on error, making use of this flag for error detection and processing unattractive.
Credits for the generic read with IFS processing go to this answer: https://unix.stackexchange.com/a/430182/45479 .
In my experience we usually use curl this way
curl -f http://localhost:1234/foo || exit 1
curl: (22) The requested URL returned error: 400 Bad Request
This way the pipeline fails when curl fails, and the error message also shows the status code.
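A sketch of how that usually looks inside a script (note that with -f the body of an error response is discarded, so this suits the "just fail" case rather than the "inspect the error body" case):
response=$(curl -fsS http://localhost:1234/foo)
rc=$?
if [ $rc -ne 0 ]; then
echo "request failed (curl exit code $rc)" >&2
exit $rc
fi
echo "$response"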
Append a line like "http_code:200" at the end of the output, then grep for the keyword "http_code:" and extract the response code.
result=$(curl -w "\nhttp_code:%{http_code}" http://localhost)
echo "result: ${result}" #the curl result with "http_code:" at the end
http_code=$(echo "${result}" | grep 'http_code:' | sed 's/http_code://g')
echo "HTTP_CODE: ${http_code}" #the http response code
In this case, you can still use the non-silent mode / verbose mode to get more information about the request such as the curl response body.
Wow, so many answers; the cURL devs definitely left this to us as a homework exercise :) OK, here is my take: a wrapper script that makes cURL work the way it's supposed to, i.e.:
show the output as cURL would.
exit with non-zero code in case of HTTP response code not in 2XX range
Save it as curl-wrapper.sh:
#!/bin/bash
output=$(curl -w "\n%{http_code}" "$@")
res=$?
if [[ "$res" != "0" ]]; then
echo -e "$output"
exit $res
fi
if [[ $output =~ [^0-9]([0-9]+)$ ]]; then
httpCode=${BASH_REMATCH[1]}
body=${output:0:-${#httpCode}}
echo -e "$body"
if (($httpCode < 200 || $httpCode >= 300)); then
# Remove this if you want to have pure output even in
# case of failure:
echo
echo "Failure HTTP response code: ${httpCode}"
exit 1
fi
else
echo -e "$output"
echo
echo "Cannot get the HTTP return code"
exit 1
fi
So then it's just business as usual, but instead of curl do ./curl-wrapper.sh:
So when the result falls in 200-299 range:
./curl-wrapper.sh www.google.com
# ...the same output as pure curl would return...
echo $?
# 0
And when the result is outside the 200-299 range:
./curl-wrapper.sh www.google.com/no-such-page
# ...the same output as pure curl would return - plus the line
# below with the failed HTTP code, this line can be removed if needed:
#
# Failure HTTP response code: 404
echo $?
# 1
Just do not pass the -w/--write-out argument, since that's added inside the script.
I used the following way of getting both the return code and the response body in the console.
NOTE: tee writes the output to a file as well as to the console, which solved my purpose.
Sample CURL call for reference:
curl -s -i -k --location --request POST "${HOST}:${PORT}/api/14/project/${PROJECT_NAME}/jobs/import" \
--header 'Content-Type: application/yaml' \
--header "X-Rundeck-Auth-Token: ${JOB_IMPORT_TOKEN}" \
--data "$(cat $yaml_file)" 2>&1 | tee -a $response_file
return_code=$(cat $response_file | head -3 | tail -1 | awk '{print $2}')
if [ "$return_code" != "200" ]; then
echo -e "\nJob import api call failed with rc: $return_code, please rerun or change the pipeline script."
exit $return_code
else
echo "Job import api call completed successfully with rc: $return_code"
fi
Hope this helps a few people.
To capture only response:
curl --location --request GET "http://localhost:8000"
To capture the response and its status code:
curl --location --request GET "http://localhost:8000" -w "%{http_code}"
To capture the response in a file:
curl --location --request GET "http://localhost:8000" -s -o "response.txt"
while : ; do curl -sL -w "%{http_code} %{url_effective}\\n" http://host -o /dev/null; done
This works for me in PowerShell (where curl is typically an alias for Invoke-WebRequest):
curl -Uri 'google.com' | Select-Object StatusCode

How to capture actual response code and response body with HTTPie in bash script?

I have a bash script to call several APIs using HTTPie. I want to capture both the response body AND the HTTP status code.
Here is the best I have managed so far:
rspBody=$( http $URL --check-status --ignore-stdin )
statusCode=$?
Command substitution lets me get the body, and the "--check-status" flag gives me a simplified code (such as 0, 3, 4, etc) corresponding to the code family.
The problem is I need to distinguish between say a 401 and a 404 code, but I only get 4.
Is there a way to get the actual status code without having to do a verbose dump into a file and parse for stuff?
[edit]
This is my workaround in case it helps anyone, but I'd still like a better idea if you have one:
TMP=$(mktemp)
FLUSH_RSP=$( http POST ${CACHE_URL} --check-status --ignore-stdin 2> "$TMP")
STAT_FAMILY=$?
flush_err=$(cat "$TMP" | awk '{
where = match($0, /[0-9]+/)
if (where) {
print substr($0, RSTART, RLENGTH);
}
}' -)
rm "$TMP"
STDERR contains a (usually) 3-line message with the HTTP code in it, so I dump that to a temp file and am still able to capture the response body (from STDOUT) in a variable.
I then parse that temp file looking for a number, but this seems fragile to me.
There's no ready-made solution as such for this but it's achievable with a bit of scripting. For example:
STATUS=$(http -hdo ./body httpbin.org/get 2>&1 | grep HTTP/ | cut -d ' ' -f 2)
BODY=$(cat ./body)
rm ./body
echo $STATUS
# 200
echo $BODY
# { "args": {}, "headers": { "Accept": "*/*", "Accept-Encoding": "identity", "Host": "httpbin.org", "User-Agent": "HTTPie/1.0.0-dev" }, "origin": "84.242.118.58", "url": "http://httpbin.org/get" }
Explanation of the command:
http --headers \ # Print out the response headers (they would otherwise be suppressed for piped output)
--download \ # Enable download mode
--output=./body \ # Save response body to this file
httpbin.org/get 2>&1 \ # Merge STDERR into STDOUT (with --download, console output otherwise goes to STDERR)
| grep HTTP/ \ # Find the Status-Line
| cut -d ' ' -f 2 # Get the status code
https://gist.github.com/jakubroztocil/ad06f159c5afbe278b5fcfa8bf3b5313
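If this is needed in more than one place, the same command wraps easily in a small function; http_status_to is just a made-up name for this sketch:
http_status_to () {
# writes the body to the file named in $1 and prints the numeric status code on stdout
bodyfile=$1; shift
http --headers --download --output="$bodyfile" --ignore-stdin "$@" 2>&1 | grep HTTP/ | cut -d ' ' -f 2
}
STATUS=$(http_status_to ./body httpbin.org/get)
BODY=$(cat ./body)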

Consul - Alert if drive is full

In the demo of consul, there are checks for disk utilization and memory utilization.
http://demo.consul.io/ui/#/ams2/nodes/ams2-server-1
How could you write a configuration to do what the demo shows? Warning at 10% and critical errors at 5%?
Here is what I am trying:
{
"check": {
"name": "Disk Util",
"script": "disk_util=$(df -k | grep '/dev/sda1' | awk '{print $5}' | sed 's/[^0-9]*//g' ) | if [ $disk_util > 90 ] ; then echo 'Disk /dev/sda above 90% full' && exit 1; elif [ $disk_util > 80 ] ; then echo 'Disk /dev/sda above 80%' && exit 3; else exit 0; fi",
"interval": "2m"
}
}
Here is the same script, but more human-readable:
disk_util=$(df -k | grep '/dev/sda1' | awk '{print $5}' | sed 's/[^0-9]*//g' ) |
if [ $disk_util > 90 ]
then echo 'Disk /dev/sda above 90% full' && exit 1
elif [ $disk_util > 80 ]
then echo 'Disk /dev/sda above 80%' && exit 3
else exit 0; fi
It seems like the check is working, but it doesn't print out any text. How can I verify this is working, and print output?
The output that you are seeing in the demo is produced by the Nagios plugin check_disk (https://www.monitoring-plugins.org/doc/man/check_disk.html).
The "Output" field gets populated by stdout of the check. Your check runs cleanly and produces no output. So you see nothing.
To add some notes just add a "notes" field in the check definition as outlined in the documentation: https://www.consul.io/docs/agent/checks.html
Your check json file would look something like this:
{
"check": {
"name": "disks",
"notes": "Critical 5%, warning 10% free",
"script": "/path/to/check_disk -w 10% -c 5%",
"interval": "2m"
}
}
Exit code for your warning state should be 1, for critical, 2 or higher. (See "Check Scripts" at https://www.consul.io/docs/agent/checks.html), so you likely want to swap your exit lines.
Your 'OK' state (disk use < 80%) does not give any output, which is most likely why you see blank output.
I second the notion of using the Nagios plugins rather than rolling your own. Many OSes have a nagios-plugins package that is a yum/apt install away.
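If you would rather keep a hand-rolled script than install check_disk, here is a corrected sketch of the original check. (In the question's version the assignment is piped into the if, so $disk_util is empty there, and [ $disk_util > 90 ] is a redirection rather than a comparison.)
#!/bin/bash
# sketch only: same df parsing as the question, /dev/sda1 assumed
disk_util=$(df -k | grep '/dev/sda1' | awk '{print $5}' | sed 's/[^0-9]*//g')
if [ "$disk_util" -gt 90 ]; then
echo "Disk /dev/sda1 above 90% full (${disk_util}% used)"
exit 2  # critical
elif [ "$disk_util" -gt 80 ]; then
echo "Disk /dev/sda1 above 80% full (${disk_util}% used)"
exit 1  # warning
else
echo "Disk /dev/sda1 at ${disk_util}% used"
exit 0  # passing
fi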
Health checks rely on the exit code of the check. To test if the health checks are being read by the Consul server you could write a script that always exits with a 1, and then you will see the health check as failed. Then replace it with a script that always returns 0 and you should see the health check as passed.
If you want to return text to the ui, add an output field to the json.
It seems Consul reads stdout only, not stderr. I tested with a redirect (2>&1) in the service check file configuration, and that seems to work!
JSON config
{
"check": {
"name": "disks",
"notes": "Critical 5%, warning 10% free",
"script": "/path/to/check_disk -w 10% -c 5% 2>&1",
"interval": "2m"
}
}
Output result
