I'm trying to send a notification via pushover using curl in a bash script.
I cannot get curl -F to interpret the line break correctly though.
curl -s \
-F "token=TOKEN" \
-F "user=USER" \
-F "message=Root Shell Access on HOST \n `date` \n `who` " \
https://api.pushover.net/1/messages.json > NUL
I've tried:
\n
\\\n
%A0
I'd rather push the message out directly, not through a file.
curl doesn't interpret backslash escapes, so you have to insert an actual newline into the argument that curl sees. In other words, you have to get the shell (bash in this case) to interpret the \n, or you need to insert a real newline.
A POSIX standard shell does not interpret C escapes like \n, although the standard utility printf does. However, bash does provide a way to do it: inside the quoting form $'...', C-style backslash escapes are interpreted. Otherwise, $'...' acts just like '...', so parameter and command substitutions do not take place.
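A quick illustration of the difference:
printf '%s\n' 'a\nb'    # plain single quotes: \n stays literal, prints a\nb
printf '%s\n' $'a\nb'   # $'...': \n becomes a real newline, prints a and b on separate lines
printf '%s\n' $'$HOME'  # no parameter expansion inside $'...': prints $HOME literally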
However, any shell -- including bash -- allows newlines to appear inside quotes, and the newline is just passed through as-is. So you could write:
curl -s \
-F "token=$TOKEN" \
-F "user=$USER" \
-F "message=Root Shell Access on $HOST
$(date)
$(who)
" \
https://api.pushover.net/1/messages.json > /dev/null
(Note: I inserted parameter expansions where it seemed like they were missing from the original curl command and changed the deprecated backtick command substitutions to the recommended $(...) form.)
The only problem with including literal newlines, as above, is that it messes up indentation, if you care about appearances. So you might prefer bash's $'...' form:
curl -s \
-F "token=$TOKEN" \
-F "user=$USER" \
-F "message=Root Shell Access on $HOST"$'\n'"$(date)"$'\n'"$(who)" \
https://api.pushover.net/1/messages.json > /dev/null
That's also a little hard to read, but it is completely legal. The shell allows a single argument ("word") to be composed of any number of quoted or unquoted segments, as long as there is no whitespace between the segments. But you can avoid the multiple quote syntax by predefining a variable, which some people find more readable:
NL=$'\n'
curl -s \
-F "token=$TOKEN" \
-F "user=$USER" \
-F "message=Root Shell Access on $HOST$NL$(date)$NL$(who)" \
https://api.pushover.net/1/messages.json > /dev/null
Finally, you could use the standard utility printf, if you are more used to that style:
curl -s \
-F "token=$TOKEN" \
-F "user=$USER" \
-F "$(printf "message=Root Shell Access on %s\n%s\n%s\n" \
"$HOST" "$(date)" "$(who)")" \
https://api.pushover.net/1/messages.json > /dev/null
Related
I have a variable that has a command that I want to run.
It has a bunch of double-quotes. when I echo it, it looks beautiful.
I can copy-paste it and run it just fine.
I tried simply $cmd, but it doesn't work. I get an error as if the command is malformed.
I then tried running it via eval "$cmd" or similarly, bash -c "$cmd", which works, but I don't get any output until the command is done running.
Example with bash -c "$cmd":
This runs the command, BUT I don't get any output until the command is done running, which sucks and I'm trying to fix that:
cmd="docker run -v \"$PROJECT_DIR\":\"$PROJECT_DIR\" \
-v \"$PPI_ROOT_DIR/utilities/build_deploy/terraform/modules/\":/ppi_modules \
--workdir \"$PROJECT_DIR/terraform\" \
--env TF_VAR_aws_account_id=$AWS_ACCOUNT_ID \
--env TF_VAR_environment=${ENVIRONMENT} \
--env TF_VAR_region=${AWS_DEFAULT_REGION:-us-west-2} \
${OPTIONAL_AWS_ENV_VARS} \
${CUSTOM_TF_VARS} \
${TERRAFORM_BASE_IMAGE} \
init --plugin-dir=/.terraform/providers \
-reconfigure \
-backend-config=\"bucket=${AWS_ACCOUNT_ID}-tf-remote-state\" \
-backend-config=\"key=${ENVIRONMENT}/${PROJECT_NAME}\" \
-backend-config=\"region=us-west-2\" \
-backend-config=\"dynamodb_table=terraform-locks\" \
-backend=true"
# command output looks good. I can copy and paste it and run it my terminal too.
echo $cmd
# Running the command via bash works,
# but I don't get the output until the command is done running,
# which is what I'm trying to fix:
bash -c "$cmd"
Here is an example using a bash array.
It prints to the screen perfectly, but just like running it via $cmd, it throws an error as if the command were malformed:
cmd=(docker run -v \"$PROJECT_DIR\":\"$PROJECT_DIR\" \
-v \"$PPI_ROOT_DIR/utilities/build_deploy/terraform/modules/\":/ppi_modules \
--workdir \"$PROJECT_DIR/terraform\" \
--env TF_VAR_aws_account_id=$AWS_ACCOUNT_ID \
--env TF_VAR_environment=${ENVIRONMENT} \
--env TF_VAR_region=${AWS_DEFAULT_REGION:-us-west-2} \
${OPTIONAL_AWS_ENV_VARS} \
${CUSTOM_TF_VARS} \
${TERRAFORM_BASE_IMAGE} \
init --plugin-dir=/.terraform/providers \
-reconfigure \
-backend-config=\"bucket=${AWS_ACCOUNT_ID}-tf-remote-state\" \
-backend-config=\"key=${ENVIRONMENT}/${PROJECT_NAME}\" \
-backend-config=\"region=us-west-2\" \
-backend-config=\"dynamodb_table=terraform-locks\" \
-backend=true)
echo "${cmd[@]}"
"${cmd[@]}"
How can I execute a bash variable that contains double-quotes, and get the output in real time, just as if I had executed it via $cmd (which doesn't work)?
Similar to these questions, but my question is to run it AND get the output in realtime:
Execute command containing quotes from shell variable
Bash inserting quotes into string before execution
bash script execute command with double quotes, single quotes and spaces
In your array version, the double quotes escaped by backslashes become part of the arguments themselves, which is not what you intend. So removing the backslashes should fix the issue.
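A trimmed sketch of the corrected array, keeping just a few of your arguments to show the pattern (the rest follow the same way; ${OPTIONAL_AWS_ENV_VARS} stays unquoted on purpose so it can still split into multiple words):
cmd=(docker run -v "$PROJECT_DIR":"$PROJECT_DIR" \
    --workdir "$PROJECT_DIR/terraform" \
    --env TF_VAR_environment="${ENVIRONMENT}" \
    ${OPTIONAL_AWS_ENV_VARS} \
    ${TERRAFORM_BASE_IMAGE} \
    init -reconfigure \
    -backend-config="bucket=${AWS_ACCOUNT_ID}-tf-remote-state")
echo "${cmd[@]}"   # prints the words for inspection
"${cmd[@]}"        # runs docker directly, so output streams in real time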
I am optionally including a user and password in a curl request as follows:
declare creds=""
if [ -n "$user" ] && [ -n "$password" ]; then
creds="-u ${user}:${password}"
fi
output=$(curl ${creds} -X PUT -v --write-out "%{http_code}" "$url" \
-H 'Content-Type: application/json' -s -o /dev/null --data "${payload}")
This seems to work fine, but I'm getting this shellcheck warning:
Double quote to prevent globbing and word splitting
https://github.com/koalaman/shellcheck/wiki/SC2086
Putting quotes around it doesn't work, e.g. if I do this:
output=$(curl "${creds}" -X PUT -v --write-out "%{http_code}" "$url" \
-H 'Content-Type: application/json' -s -o /dev/null --data "${payload}")
then when the username and password are not supplied, this results in empty double quotes in the curl request curl "" -X PUT ..., which generates a <url> malformed error.
I could use an if-else for the curl command, but I'd rather avoid the duplication. Is the above approach acceptable despite the shellcheck warning?
You were right to put quotes around the variable, but shellcheck doesn't catch the deeper problem of storing a command in a string variable, which has its own pitfalls. Since this is an issue with how the shell itself works, shellcheck can't quite catch it out of the box. When you did the below
creds="-u ${user}:${password}"
and quoted "$creds", it is passed as one single argument word to curl instead of being broken down into -u and "${user}:${password}" separately. The right approach is to use an array to store the arguments and expand it, so that the words are preserved and not split by the shell (the foremost reason to quote the variable, as indicated by shellcheck):
creds=(-u "${user}:${password}")
and invoke
curl "${creds[@]}" <rest-of-the-cmd>
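Applied to your optional-credentials case, a sketch:
creds=()
if [ -n "$user" ] && [ -n "$password" ]; then
    creds=(-u "${user}:${password}")
fi
# an empty array expands to zero words, so curl never sees an empty "" argument
output=$(curl "${creds[@]}" -X PUT -v --write-out "%{http_code}" "$url" \
    -H 'Content-Type: application/json' -s -o /dev/null --data "${payload}")
(One caveat: in bash older than 4.4, expanding an empty array under set -u triggers an unbound-variable error.)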
Also explore the following
I'm trying to put a command in a variable, but the complex cases always fail!
How to store a command in a variable in a shell script?
I'm trying to run a script for pulling finance history from yahoo. Boris's answer from this thread
wget can't download yahoo finance data any more
works for me ~2 out of 3 times, but fails if the crumb returned from the cookie has a "\" character in it.
Code that sometimes works looks like this
#!/usr/bin/sh
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
wget --no-check-certificate --save-cookies=cookie.txt https://finance.yahoo.com/quote/$symbol/?p=$symbol -O C:/trip/stocks/stocknamelist/crumb.store
crumb=$(grep 'root.*App' crumb.store | sed 's/,/\n/g' | grep CrumbStore | sed 's/"CrumbStore":{"crumb":"\(.*\)"}/\1/')
echo $crumb
fileloc=$"https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo $fileloc
wget --no-check-certificate --load-cookies=cookie.txt $fileloc -O c:/trip/stocks/temphistory/hs$symbol.csv
rm cookie.txt crumb.store
But that doesn't seem to process in wget the way I intend either, as it seems to be interpreting as described here:
https://askubuntu.com/questions/758080/getting-scheme-missing-error-with-wget
Any suggestions on how to pass the $crumb variable into wget so that wget doesn't error out if $crumb has a "\" character in it?
Edited to show the full script. To clarify, I've got Cygwin installed with the wget package. I call the script from the cmd prompt as follows (an example where the script above is named "stocknamedownload.sh", the stock symbol I'm downloading is "A", and the start date is 19800101):
c:\trip\stocks\StockNameList>bash stocknamedownload.sh A 19800101
This script seems to work fine - unless the crumb returned contains a "\" character in it.
The following implementation appears to work 100% of the time -- I'm unable to reproduce the claimed sporadic failures:
#!/usr/bin/env bash
set -o pipefail
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
# store complete webpage text in a variable
page_text=$(curl --fail --cookie-jar cookies \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol") || exit
# extract the JSON used by JavaScript in the page
app_json=$(grep -e 'root.App.main = ' <<<"$page_text" \
| sed -e 's#^root.App.main = ##' \
-e 's#[;]$##') || exit
# use jq to extract the crumb from that JSON
crumb=$(jq -r \
'.context.dispatcher.stores.CrumbStore.crumb' \
<<<"$app_json" | tr -d '\r') || exit
# Perform our actual download
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
curl --fail --cookie cookies "$fileloc" >"hs$symbol.csv"
Note that the tr -d '\r' is only necessary when using a native-Windows jq mixed with an otherwise native-Cygwin set of tools.
You are adding quotes to the value of the variable instead of quoting the expansion. You are also trying to use tools that don't know what JSON is to process JSON; use jq.
wget --no-check-certificate \
--save-cookies=cookie.txt \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol" \
-O C:/trip/stocks/stocknamelist/crumb.store
# Something like this; it's hard to reverse engineer the structure
# of crumb.store from your pipeline.
crumb=$(jq -r '.CrumbStore.crumb' crumb.store)
echo "$crumb"
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo "$fileloc"
wget --no-check-certificate \
--load-cookies=cookie.txt "$fileloc" \
-O "c:/trip/stocks/temphistory/hs$symbol.csv"
This is quite annoying for what should be a rather simple task. Following this guide, I wrote this:
#!/bin/bash
content=$(wget "https://example.com/" -O -)
ampersand=$(echo '\&')
xmllint --html --xpath '//*[@id="table"]/tbody' - <<<"$content" 2>/dev/null |
xmlstarlet sel -t \
-m "/tbody/tr/td" \
-o "https://example.com" \
-v "a//@href" \
-o "/?A=1" \
-o "$ampersand" \
-o "B=2" -n
I successfully extract each link from the table and everything gets concatenated correctly; however, instead of reproducing the ampersand as &, I receive this at the end of each link:
https://example.com/hello-world/?A=1\&amp;B=2
But actually, I was looking for something like:
https://example.com/hello-world/?A=1&B=2
The idea is to escape the character using a backslash \& so that it gets ignored. Initially, I tried placing it directly into -o "\&" \ instead of -o "$ampersand" \ and removing ampersand=$(echo '\&') in this case scenario. Still the same result.
Essentially, by removing the backslash it still outputs:
https://example.com/hello-world/?A=1&amp;B=2
Only the \ in front of the & is removed.
Why?
I'm sure it is something basic that is missing.
&amp; is the correct way to print & in an XML document, but since you just want a plain URL, your output should not be XML. Therefore you need to switch to text mode, by passing --text or -T to the sel command.
Your example input doesn't quite work because example.com doesn't have any table elements, but here is a working example building links from p elements instead.
content=$(wget 'https://example.com/' -O -)
xmlstarlet fo --html <<<"$content" |
xmlstarlet sel -T -t \
-m '//p[a]' \
--if 'not(starts-with(a//@href,"http"))' \
-o 'https://example.com/' \
--break \
-v 'a//@href' \
-o '/?A=1' \
-o '&' \
-o 'B=2' -n
The output is
http://www.iana.org/domains/example/?A=1&B=2
As you have already seen, backslash-escaping isn't the solution here. I can think of two possible options:
Extract the hrefs (you probably don't need to be using both xmllint and xmlstarlet for this), then just use a standard text processing tool such as sed to add the start and the end:
sed 's,^,https://example.com/,; s,$,/?A=1\&B=2,'
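For example, applied to a hypothetical bare path on stdin:
printf '%s\n' hello-world | sed 's,^,https://example.com/,; s,$,/?A=1\&B=2,'
# -> https://example.com/hello-world/?A=1&B=2
(The \& in the replacement is needed because a bare & in sed's replacement text stands for the whole match.)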
Alternatively, pipe the output of what you've currently got to xmlstarlet unesc, which will change &amp; into &.
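For example, feeding it a line that contains the entity:
printf '%s\n' 'https://example.com/hello-world/?A=1&amp;B=2' | xmlstarlet unesc
# -> https://example.com/hello-world/?A=1&B=2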
Sorry, I can't reproduce your result, but why not make substitutions? Just filter your results through
sed 's/\\&/\&/g'
by adding it to your pipe. It should replace every \& with &.
I have a cron issue with curl:
curl -w "%{time_total}\n" -o /dev/null -s http://myurl.com >> ~/log
works great and adds a line to the log file with time_total.
But the same line with cron doesn't do anything.
It's not a path problem because curl http://myurl.com >> ~/log works.
% is a special character for crontab. From man 5 crontab:
The "sixth" field (the rest of the line) specifies the command to be
run. The entire command portion of the line, up to a newline or a
"%" character, will be executed by /bin/sh or by the shell specified
in the SHELL variable of the cronfile. A "%" character in the
command, unless escaped with a backslash (\), will be changed into
newline characters, and all data after the first % will be sent to
the command as standard input.
So you need to escape the % character:
curl -w "%{time_total}\n" -o /dev/null -s http://myurl.com >> ~/log
to
curl -w "\%{time_total}\n" -o /dev/null -s http://myurl.com >> ~/log
^
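So a full crontab entry might look like this (assuming, for illustration, a five-minute schedule):
*/5 * * * * curl -w "\%{time_total}\n" -o /dev/null -s http://myurl.com >> ~/log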