I am using this bash script to post a new message to my Rocket.Chat instance:
#!/usr/bin/env bash

function usage {
    programName=$0
    echo "description: use this program to post messages to Rocket.chat channel"
    echo "usage: $programName [-b \"message body\"] [-u \"rocket.chat url\"]"
    echo "  -b The message body"
    echo "  -u The rocket.chat hook url to post to"
    exit 1
}

while getopts ":b:u:h" opt; do
    case ${opt} in
        u) rocketUrl="$OPTARG"
           ;;
        b) msgBody="$OPTARG"
           ;;
        h) usage
           ;;
        \?) echo "Invalid option -$OPTARG" >&2
            ;;
    esac
done

if [[ ! "${rocketUrl}" || ! "${msgBody}" ]]; then
    echo "all arguments are required"
    usage
fi

read -d '' payLoad << EOF
{"text": "${msgBody}"}
EOF

echo $payLoad

statusCode=$(curl \
    --write-out %{http_code} \
    --silent \
    --output /dev/null \
    -X POST \
    -H 'Content-type: application/json' \
    --data "${payLoad}" ${rocketUrl})

echo ${statusCode}
Everything works fine, so I can send a new message like this:
./postToRocket.sh -b "Hello from here" -u $RocketURL
But when I try to send a message with multiple lines like this:
./postToRocket.sh -b "Hello from here\nThis is a new line" -u $RocketURL
it doesn't work. I get the following output:
{"text": "Hello from heren New Line"}
200
So what do I need to change to get line breaks working with this bash script? Any ideas?
First, the thing making the backslash in your \n disappear was the lack of the -r argument to read. Making it read -r -d '' payLoad will fix that. However, that's not a good solution: it requires your callers to pass strings already escaped for inclusion in JSON, instead of letting them pass any possible string.
To make valid JSON with an arbitrary string -- including one that can contain newline literals, quotes, backslashes, or other content that has to be escaped -- use jq:
payLoad=$(jq -n --arg msgBody "$msgBody" '{"text": $msgBody}')
...and then, after doing that, amend your calling convention:
./postToRocket.sh -b $'Hello from here\nThis is a new line' -u "$RocketURL"
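For reference, here is the escaping jq performs in a session like this (a hypothetical transcript; the \n in the output is a JSON escape sequence, not a literal):
$ msgBody=$'Hello from here\nThis is a new line'
$ jq -n --arg msgBody "$msgBody" '{"text": $msgBody}'
{
  "text": "Hello from here\nThis is a new line"
}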
I believe this has already been answered on SO here.
It should work by adding the $ sign and using single quotes:
./postToRocket.sh -b $'Hello from here\nThis is a new line' -u $RocketURL
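For reference, $'...' is bash's ANSI-C quoting: the shell converts the \n escape into a literal newline before the string ever reaches the script, as a quick test shows:
$ printf '%s\n' $'Hello from here\nThis is a new line'
Hello from here
This is a new line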
I have the following Bash code:
response=$( curl -Ls $endpoint )
if [ -n "$response" ]; then # nonempty
    echo "$response" | jq .
fi
The problem is that sometimes the response can be non-empty but not JSON (if it's not a 200).
Is it possible to pipe the output through jq ONLY if it is valid JSON?
The following works:
echo $x | jq . 2>/dev/null || echo $x
Test:
> x='{"foo":123}'; echo $x | jq . 2>/dev/null || echo "Invalid: $x"
{
"foo": 123
}
> x='}'; echo $x | jq . 2>/dev/null || echo "Invalid: $x"
Invalid: }
However, I don't feel comfortable with it.
If you want to test the response type before submitting it to jq, you can test the Content-Type header in the server's response.
For that, you want curl to send you the full response headers and body, with curl -i.
Here is an implementation of it:
#!/usr/bin/env sh

endpoint='https://worldtimeapi.org/api/timezone/Europe/Paris.json'

# Headers and body are delimited by an empty line, with CRLF as the line ending.
# See: RFC7230 HTTP/1.1 Message Syntax and Routing / section 3: Message Format
# https://tools.ietf.org/html/rfc7230#section-3
crlf="$(printf '\r\n_')"   # add trailing _ to prevent trailing newline trim
crlf="${crlf%_}"           # remove trailing _
http_delim="$crlf$crlf"    # RFC7230 section 3

full_http_response="$(curl --silent --include --url "$endpoint")"
http_headers="${full_http_response%$http_delim*}"
http_body="${full_http_response#*$http_delim}"

case $http_headers in
    'HTTP/1.1 200 OK'*'Content-Type: application/json'*)
        # Yes, response body is JSON, so process it with jq.
        jq -n "$http_body"
        ;;
esac
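If you would rather not parse raw headers, a simpler variant is to let curl report the content type itself through its standard --write-out variable %{content_type}. A sketch of the same check (same endpoint; the restructuring is my own):
#!/usr/bin/env sh

# Sketch: ask curl for the response's content type via --write-out,
# and only hand the body to jq when it is JSON.
endpoint='https://worldtimeapi.org/api/timezone/Europe/Paris.json'

body_file="$(mktemp)"
content_type="$(curl --silent --output "$body_file" --write-out '%{content_type}' --url "$endpoint")"

case $content_type in
    application/json*)
        jq . < "$body_file"
        ;;
esac

rm -f "$body_file"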
The following works:
echo $x | jq . 2>/dev/null || echo $x
Except for the use of echo here, this is actually a good approach: it has the advantages of both simplicity and efficiency. It is better than naively using the -e option, as the return codes produced by -e are more complex.
In other words, there is much to be said for:
printf "%s" "$x" | jq . 2> /dev/null || printf "%s\n" "$x"
Efficiency
The argument for efficiency is as follows:
If $x holds valid JSON, then there is no overhead.
If $x is invalid as JSON, jq will fail quickly; in this case too, the overhead of calling jq will almost surely be no worse, or not much worse, than checking the Content-Type.
Warning
The official documentation for the return codes produced by jq when invoked without the -e option is not strictly correct, as illustrated by:
$ jq empty <<< 'foo bat' 2> /dev/null ; echo $?
4
This works:
response=$( curl -Ls -H 'Cache-Control: max-age=0' "$endpoint" )
if [ -n "$response" ]; then # nonempty
    echo "Got server response"
    # https://stackoverflow.com/questions/46954692/check-if-string-is-a-valid-json-with-jq
    if jq --exit-status type >/dev/null 2>&1 <<<"$response"; then
        # Parsed JSON successfully and got something other than false/null
        echo "$response" | jq .
        echo "... after $i seconds"   # $i and the return statements imply an enclosing retry function
        return 0
    else
        echo "Response is not valid JSON"
        echo "$response"
        return 1
    fi
fi
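Since this snippet uses return and an $i counter, it presumably lives inside a retry function; a minimal self-contained framing could look like the sketch below (the wait_for_json name, the 30-attempt limit, and the one-second polling interval are my assumptions, not part of the original):
#!/bin/bash
# Hypothetical wrapper around the snippet above: poll until the endpoint
# returns valid JSON, for at most 30 attempts, one second apart.
# usage: wait_for_json "$endpoint"
wait_for_json() {
    endpoint="${1}"
    for i in $(seq 1 30); do
        response=$( curl -Ls -H 'Cache-Control: max-age=0' "$endpoint" )
        if [ -n "$response" ]; then
            if jq --exit-status type >/dev/null 2>&1 <<<"$response"; then
                echo "$response" | jq .
                echo "... after $i seconds"
                return 0
            fi
        fi
        sleep 1
    done
    return 1
}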
I'm using Mosquitto on an OpenWrt device to receive some data from a server and then forward that same data to a local printer for printing.
I'm using this script to receive the data:
mosquitto_sub -h "${HOST}" -k 30 -c -p 8883 -t "${TOPIC}" -u "${USERNAME}" -P "${PASSWORD}" --id "${ID}" | /bin/sh /bin/printer_execute "${TOPIC}" "${PRINTER}" "${USERNAME}" "${PASSWORD}"
And the printer_execute code:
#!/bin/sh

TOPIC="${1}"
PRINTER="${2}"
USERNAME="${3}"
PASSWORD="${4}"

while read MSG
do
    echo "input: ${MSG}"
    echo "INPUT MSG: " "${MSG}" >> /root/log
    RES=`curl -m 2 --header "Content-Type: text/xml;charset=UTF-8" --header "SOAPAction: ''" --header "If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT" --data "${MSG}" "http://${PRINTER}/cgi-bin/epos/service.cgi?devid=local_printer&timeout=5000"`
    mosquitto_pub -h ${HOST_PLACEHOLDER} -p 8883 -t "${TOPIC}-response" -m "${RES}" -u "${USERNAME}" -P "${PASSWORD}"
    echo "RESULT CURL: " "${RES}" >> /root/log
done
This solution works at a relatively low message rate, but when the volume gets too high the printer_execute script stops working. I'm pretty new to shell scripting, and I guess the problem could be caused by the pipe-and-while-read pattern or by the while exit condition, but I'm not really sure.
Does anyone have an idea, or has anyone hit a similar problem and knows how to solve it?
EDIT:
In light of the answers, I have tried this:
EDIT 2: Sorry, in the first edit I only added what I had modified, but the entire script looks like this, so the variables should be in scope.
#!/bin/sh

TOPIC="${1}"
PRINTER="${2}"
USERNAME="${3}"
PASSWORD="${4}"

PrintOne(){
    MSG="${1}"
    RES=$(curl [params])
    mosquitto_pub -h [host] -p 8883 -d -t "${TOPIC}-response" -m "${RES}" -u "${USERNAME}" -P "${PASSWORD}"
    echo "RESULT CURL: " "${RES}" >> /root/log
}

while read msg ; do
    PrintOne "$msg" &
done
With PrintOne and the ampersand it takes one message and then stops working; without the & it behaves just like before.
You could try making a function that handles one message and calling it in the background (by appending an ampersand), so that you can respond quickly and in parallel. That will allow you to take longer to handle each message... for a period. If your messages continually arrive faster than you can handle them, there will inevitably be a backlog.
Something like this:
#!/bin/bash

PrintOne(){
    echo "Received $1"
    curl ...
    mosquitto_pub ...
    echo $RESULT
}

while read msg ; do
    PrintOne "$msg" &
done
If you want a little example, change the code to this, save it as go, and make it executable with chmod +x go:
#!/bin/bash

PrintOne(){
    echo "Received $1"
    sleep 2
    echo "Finished $1"
}

while read msg ; do
    PrintOne "$msg" &
done
Now send it 10 lines:
seq 10 | ./go
Then remove the ampersand, do exactly the same thing again, and you will see the difference.
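With the ampersand, the output should look roughly like this (the exact interleaving may vary): all ten Received lines appear at once, then all ten Finished lines follow about two seconds later.
Received 1
Received 2
...
Received 10
Finished 1
Finished 2
...
Finished 10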
A more complete version of my answer is as follows:
#!/bin/bash

PrintOne(){
    TOPIC="${1}"
    PRINTER="${2}"
    USERNAME="${3}"
    PASSWORD="${4}"
    MSG="${5}"    # the message read from stdin
    curl ...
    mosquitto_pub ...
    echo $RESULT
}

while read msg ; do
    PrintOne "${1}" "${2}" "${3}" "${4}" "$msg" &
done
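If the pile-up of background jobs itself becomes a problem, you can cap the number of concurrent handlers. A sketch, assuming bash 4.3 or later for wait -n; the MAXJOBS value of 4 is an arbitrary choice:
#!/bin/bash

MAXJOBS=4

while read -r msg ; do
    # If the cap is reached, wait for any one background job to finish.
    while [ "$(jobs -rp | wc -l)" -ge "$MAXJOBS" ]; do
        wait -n
    done
    PrintOne "${1}" "${2}" "${3}" "${4}" "$msg" &
done
wait   # let the remaining handlers drain before exiting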
I have an app that uses ActiveMQ, and typically, I test it by using AMQ's web UI to send messages to queues that my software is consuming from.
I'd like to semi-automate this, and I was hoping AMQ's command line could send a message to a specific queue, either by providing that message as text in the command invocation or, ideally, by reading it from a file.
Examples:
./activemq-send queue="my-queue" messageFile="~/someMessage.xml"
or:
./activemq-send queue="my-queue" message="<someXml>...</someXml>"
Is there any way to do this?
You could use the "A" utility to do this.
a -b tcp://somebroker:61616 -p @someMessage.xml my-queue
Disclaimer: I'm the author of A, wrote it once to do just this thing. There are other ways as well, such as the REST interface, a Groovy script and whatnot.
ActiveMQ has a REST interface that you can send messages to from the command line, using, for example, the curl utility.
Here is a script I wrote and use for this very purpose:
#!/bin/bash
#
#
# Sends a message to the message broker on localhost.
# Uses ActiveMQ's REST API and the curl utility.
#
if [ $# -lt 2 -o $# -gt 3 ] ; then
    echo "Usage: msgSender (topic|queue) DESTINATION [ FILE ]"
    echo "   Ex: msgSender topic myTopic msg.json"
    echo "   Ex: msgSender topic myTopic <<< 'this is my message'"
    exit 2
fi

UNAME=admin
PSWD=admin
TYPE=$1
DESTINATION=$2
FILE=$3
BHOST=${BROKER_HOST:-'localhost'}
BPORT=${BROKER_REST_PORT:-'8161'}

if [ -z "$FILE" -o "$FILE" = "-" ] ; then
    # Get msg from stdin if no filename given
    ( echo -n "body=" ; cat ) \
        | curl -u $UNAME:$PSWD --data-binary '@-' --proxy "" \
            "http://$BHOST:$BPORT/api/message/$DESTINATION?type=$TYPE"
else
    # Get msg from a file
    if [ ! -r "$FILE" ] ; then
        echo "File not found or not readable"
        exit 2
    fi
    ( echo -n "body=" ; cat $FILE ) \
        | curl -u $UNAME:$PSWD --data-binary '@-' --proxy "" \
            "http://$BHOST:$BPORT/api/message/$DESTINATION?type=$TYPE"
fi
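For example (hypothetical invocations, assuming the script is saved as msgSender; broker.example.com stands in for your broker host), the defaults can be overridden through the environment variables the script reads:
BROKER_HOST=broker.example.com ./msgSender queue my-queue ~/someMessage.xml
echo 'this is my message' | ./msgSender topic myTopic -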
Based on Rob Newton's answer, this is what I'm using to post a file to a queue. I also post a custom property (which is not possible through the ActiveMQ web console):
( echo -n "body=" ; cat file.xml ) | curl --data-binary '@-' -d "customProperty=value" "http://admin:admin@localhost:8161/api/message/$QueueName?type=$QueueType"
I want to extend this example webserver shell script to handle multiple requests. Here is the example source:
#!/bin/sh
# based on https://debian-administration.org/article/371/A_web_server_in_a_shell_script

base=/srv/content

while /bin/true
do
    read request
    while /bin/true; do
        read header
        [ "$header" == $'\r' ] && break;
    done

    url="${request#GET }"
    url="${url% HTTP/*}"
    filename="$base$url"

    if [ -f "$filename" ]; then
        echo -e "HTTP/1.1 200 OK\r"
        echo -e "Content-Type: `/usr/bin/file -bi \"$filename\"`\r"
        echo -e "\r"
        cat "$filename"
        echo -e "\r"
    else
        echo -e "HTTP/1.1 404 Not Found\r"
        echo -e "Content-Type: text/html\r"
        echo -e "\r"
        echo -e "404 Not Found\r"
        echo -e "Not Found
The requested resource was not found\r"
        echo -e "\r"
    fi
done
Wrapping the code in a loop is insufficient because the browser doesn't render anything. How can I make this work?
Application-specific reasons make launching the script per-request an unsuitable approach.
A TCP listener is required to accept browser connections and connect them to the script. I used socat to do this:
$ socat EXEC:./webserver TCP4-LISTEN:8080,reuseaddr,fork
This gives access to the server by pointing a browser at http://localhost:8080.
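You can also exercise it from another terminal with curl, assuming some file such as index.html exists under the content root:
$ curl -i http://localhost:8080/index.html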
The browser needs to know how much data to expect, and it won't render anything until it gets that data or the server closes the connection. The HTTP response should therefore include a Content-Length header, or it should use a chunked Transfer-Encoding. The example script does neither, yet it works, because it processes a single request and exits, which closes the connection.
So, one way to solve the problem is to set a Content-Length header. Here is an example that works:
#!/bin/sh
# stdio webserver based on https://debian-administration.org/article/371/A_web_server_in_a_shell_script

respond_with() {
    echo -e "HTTP/1.1 200 OK\r"
    echo -e "Content-Type: text/html\r"
    echo -e "Content-Length: ${#1}\r"
    echo -e "\r"
    echo "<pre>${1}</pre>"
    echo -e "\r"
}

respond_not_found() {
    content='<h1>Not Found</h1>
<p>The requested resource was not found</p>'

    echo -e "HTTP/1.1 404 Not Found\r"
    echo -e "Content-Type: text/html\r"
    echo -e "Content-Length: ${#content}\r"
    echo -e "\r"
    echo "${content}"
    echo -e "\r"
}

base='/var/www'

while /bin/true; do
    read request
    while /bin/true; do
        read header
        [ "$header" == $'\r' ] && break;
    done

    url="${request#GET }"
    url="${url% HTTP/*}"
    filename="$base/$url"

    if [ -f "$filename" ]; then
        respond_with "$(cat $filename)"
    elif [ -d "$filename" ]; then
        respond_with "$(ls -l $filename)"
    else
        respond_not_found
    fi
done
Another solution is to make the script trigger the connection close. One way to do this is to send an escape code that socat can interpret as EOF.
For example, add a BELL character code (ASCII 7, \a) to the end of the response:
echo -e '\a'
and tell socat to interpret it as EOF:
$ socat EXEC:./webserver,escape=7 TCP4-LISTEN:8080,reuseaddr,fork
Any usually unused character will do, BELL is just an example.
Although the above will work, an HTTP response should really contain a content length or transfer encoding header. The escape-code method may be more useful when using a similar technique to serve arbitrary (non-HTTP) requests from a script.
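For illustration, a chunked variant of the respond_with helper might look like the sketch below; this is my own addition, not part of the original script, and it assumes single-byte characters, because ${#var} counts characters rather than bytes:
respond_chunked() {
    body="${1}"
    echo -e "HTTP/1.1 200 OK\r"
    echo -e "Content-Type: text/html\r"
    echo -e "Transfer-Encoding: chunked\r"
    echo -e "\r"
    printf '%x\r\n' "${#body}"   # chunk size in hexadecimal
    printf '%s\r\n' "${body}"    # chunk data
    printf '0\r\n\r\n'           # zero-length chunk terminates the body
}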
I have a script that runs curl. I want to be able to optionally add a -H parameter, if a string isn't empty. What's complex is the levels of quoting and spaces.
caption="Test Caption"
if [ "${caption}" != "" ]; then
CAPT=-H "X-Caption: ${caption}"
fi
curl -A "$UA" -H "Content-MD5: $MD5" -H "X-SessionID: $SID" -H "X-Version: 1" $CAPT http://upload.example.com/$FN
The idea is that the CAPT variable is either empty, or contains the desired -H header in the same form as the others, e.g., -H "X-Caption: Test Caption"
The problem is when run, it interprets the assignment as a command to be executed:
$ bash -x -v test.sh
+ '[' 'Test caption' '!=' '' ']'
+ CAPT=-H
+ 'X-Caption: Test caption'
./test.sh: line 273: X-Caption: Test caption: command not found
I've tried resetting IFS before the code, but it didn't make a difference.
The key to making this work is to use an array.
caption="Test Caption"
if [[ $caption ]]; then
    CAPT=(-H "X-Caption: $caption")
fi

curl -A "$UA" -H "Content-MD5: $MD5" -H "X-SessionID: $SID" -H "X-Version: 1" "${CAPT[@]}" "http://upload.example.com/$FN"
If you only need to know whether or not the caption is there, you can interpolate it when it needs to be there.
caption="Test Caption"
NOCAPT="yeah, sort of, that would be nice"
if [ "${caption}" != "" ]; then
unset NOCAPT
fi
curl ${NOCAPT--H "X-Caption: ${caption}"} -A "$UA" ...
To recap, the syntax ${var-value} produces value if var is unset.
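Note the difference between - and :- here: ${var-value} substitutes only when var is unset, while ${var:-value} also substitutes when var is set but empty. That is why the answer uses unset rather than assigning an empty string:
$ unset NOCAPT;  echo "[${NOCAPT-default}]"
[default]
$ NOCAPT="";     echo "[${NOCAPT-default}]"
[]
$ NOCAPT="";     echo "[${NOCAPT:-default}]"
[default]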
I finally did get it to work. Part of the problem is specific to curl: when using the -H option to set custom headers, it seems to work best when everything after the -H (that is, both the custom header name and the value) is protected by single quotes. Then I needed to pass the constructed string through eval to get it to work.
To make this easier to read, I store a single quote in a variable named TICK.
Example:
TICK=\'
#
HDRS=""
HDRS+=" -H ${TICK}Content-MD5: ${MD5}${TICK}"
HDRS+=" -H ${TICK}X-SessionID: ${SID}${TICK}"
HDRS+=" -H ${TICK}X-Version: 1.1.1${TICK}"
HDRS+=" -H ${TICK}X-ResponseType: REST${TICK}"
HDRS+=" -H ${TICK}X-ID: ${ID}${TICK}"

if [ "${IPTC[1]}" != "" ]; then
    HDRS+=" -H ${TICK}X-Caption: ${IPTC[1]}${TICK}"
fi
if [ "${IPTC[2]}" != "" ]; then
    HDRS+=" -H ${TICK}X-Keywords: ${IPTC[2]}${TICK}"
fi

#
# Set curl flags
#
CURLFLAGS=""
CURLFLAGS+=" --cookie $COOKIES --cookie-jar $COOKIES"
CURLFLAGS+=" -A \"$UA\" -T ${TICK}${the_file}${TICK} "

eval curl $CURLFLAGS $HDRS -o $OUT http://upload.example.com/$FN