Dashing set status of widget using curl - ruby

I'm not sure if this is a possibility but I am trying to update the status of a text widget in Dashing using curl.
The status I would like to update is 'warning' or 'danger' to reflect that a server has gone down or become unresponsive. My idea is that the dashboard will be populated with several green text widgets, all saying online, when the dashboard initialises. Periodically, services running on other machines will post requests to the dashboard, changing the status of widgets.
I have tried using curl to simulate the post messages from other machines and I'm able to update the text and title of the text widgets but have had no luck updating the status.
I have been using:
curl -d "{ \"auth_token\": \"YOUR_AUTH_TOKEN\", \"status\": \"danger\" }" -H "Content-Type: application/json" http://localhost:3030/widgets/frontend11
But the widget does not change colour. I have seen examples where the CoffeeScript code was amended to include this possibility, but I thought this functionality was included in all widgets?

We do this - changing status via curl - and it works great. Here's a snip of our code:
json='{ "auth_token": "'$dashing_auth_token'", "current": '$widget_value', "value": '$widget_value', "status": "'$widget_status'" }'
curl -H Content-Type:application/json -d "${json}" "${dashing_url}widgets/${widget_id}"
The above is in a function that gets passed all of the variables, but the variable names should be easy enough to read that you can make sense of it. I can write up more (or send the whole function) if it'd help, but I think just those two lines should be enough to get you there without the rest of the clutter. Let me know if more would be helpful.
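The surrounding function is roughly shaped like this (a sketch only; the function name, argument order, and the example call at the end are made up here, and $dashing_auth_token / $dashing_url are assumed to be set elsewhere):

# Sketch of a wrapper; everything besides the json/curl lines is an assumption.
update_widget() {
    local widget_id=$1
    local widget_value=$2
    local widget_status=$3
    local json='{ "auth_token": "'$dashing_auth_token'", "current": '$widget_value', "value": '$widget_value', "status": "'$widget_status'" }'
    curl -H Content-Type:application/json -d "${json}" "${dashing_url}widgets/${widget_id}"
}

# e.g. flag the widget from the question as down:
update_widget frontend11 0 danger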

Is it possible to receive a gzipped response via elasticsearch-py?

I have an API (via hug) that sits between a UI and Elasticsearch.
The API uses elasticsearch-py to run searches, for example:
import hug
from elasticsearch import Elasticsearch

es = Elasticsearch([URL], http_compress=True)

@hug.post('/search')
def search(body):
    return es.search(index='index', body=body)
This works fine; however, I cannot figure out how to obtain a compressed JSON result.
Elasticsearch is capable of this, because a curl test checks out: the following returns a mess of characters to the console instead of JSON, and this is what I want to emulate:
curl -X GET -H 'Accept-Encoding: gzip' <URL>/<INDEX>/_search
I've tried the approach here to modify HTTP headers, but interestingly enough the "Accept-Encoding": "gzip" header is already there: it just doesn't appear to be passed to Elastic because the result is always uncompressed.
Lastly, I'm passing http_compress=True when creating the Elastic instance; however, this only compresses the payload, not the result.
Has anyone had a similar struggle and figured it out?
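For what it's worth, one way to confirm that the bytes on the wire are actually compressed, independent of the client library, is to compare curl's reported download sizes with and without the gzip header (a sketch; <URL> and <INDEX> are placeholders as above):

# The second number should be noticeably smaller if Elasticsearch compresses.
curl -s -o /dev/null -w 'plain: %{size_download} bytes\n' '<URL>/<INDEX>/_search'
curl -s -o /dev/null -w 'gzip:  %{size_download} bytes\n' -H 'Accept-Encoding: gzip' '<URL>/<INDEX>/_search'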

Properly Nesting Arrays Within Objects In Bash

I'm building a Bash shell script to run as a post-build task in Jenkins.
This script sets some information conditionally and then sends it to Slack's Inbound Webhook API.
The information is sent via a cURL request, and the JSON object that I send can accept an attachments key which points to an array of objects.
This is how my code for the attachments array and the overall JSON object looks:
attachments="[\"{\"fallback\":\"This\u0020is\u0020a\u0020fallback\u0020message\u0020just\u0020in\u0020case\",\"color\":\"#36a64f\",\"author_name\":\"$author_name\",\"text\":\"$text\"}\"]"
json="{\"channel\":\"$channel\",\"username\":\"$username\",\"icon_emoji\":\"$emoji\",\"attachments\":\"$attachments\"}"
In Jenkins' console output, I'm seeing the following translation in my cURL request:
curl -X POST --data '{"channel":"#jenkinsslacktest","username":"Jenkins-Bot","icon_emoji":":rocket:","attachments":"["{"fallback":"This\u0020is\u0020a\u0020fallback\u0020message\u0020just\u0020in\u0020case","color":"#36a64f","author_name":"TestAuthor","text":":rocket:\u0020:rocket:\u0020SUCCESS!\u0020:rocket:\u0020:rocket:"}"]"}' https://hooks.slack.com/services/link-to-my-webhook
Formatted this way, my webhook does not trigger and no message is sent to my chosen Slack channel.
If I replace the attachments key/value pair with just a text line in its place, the call is successful.
It seems like I'm just not escaping or formatting this attachments value properly. What should I be doing differently?
The only issue here is the JSON syntax of the attachments array: remove the quotes wrapping the [ ] array and the inner { } object so that it is valid JSON. Then it will work.
Here is the correct bash line:
curl -X POST --data '{"channel":"#jenkinsslacktest","username":"Jenkins-Bot","icon_emoji":":rocket:","attachments":[{"fallback":"This\u0020is\u0020a\u0020fallback\u0020message\u0020just\u0020in\u0020case","color":"#36a64f","author_name":"TestAuthor","text":":rocket:\u0020:rocket:\u0020SUCCESS!\u0020:rocket:\u0020:rocket:"}]}' https://hooks.slack.com/services/link-to-my-webhook
You can use a tool like JSONLint or JSONViewer to verify your JSON syntax is correct.
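As an aside, quoting headaches like this mostly disappear if a JSON-aware tool builds the string for you. Here is a sketch using jq (jq is not part of the original script; the webhook URL is the same placeholder as above):

# jq handles all quoting and escaping; the shell never touches raw JSON.
attachments=$(jq -n --arg an "$author_name" --arg txt "$text" \
    '[{fallback: "This is a fallback message just in case",
       color: "#36a64f", author_name: $an, text: $txt}]')
json=$(jq -n --arg ch "$channel" --arg un "$username" --arg em "$emoji" \
    --argjson att "$attachments" \
    '{channel: $ch, username: $un, icon_emoji: $em, attachments: $att}')
curl -X POST --data "$json" https://hooks.slack.com/services/link-to-my-webhook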

Tricks to figure out how to log in to Dexcom's API to get blood sugars?

Is there any way to figure out how to log in to Dexcom's API? This article discusses an approach for accessing blood sugar values, but the first step is to log in to the API, and it is unclear how that step was taken, as this is an undocumented API.
We are trying to help a diabetic get access to her blood sugar values so she can code new insulin dosing algorithms. (She wants to prevent the crashes and spikes in her blood sugars that ruin her days and put her at risk of blindness and dialysis. She believes and hopes that human + machine can do better than she can alone.)
If we are able to log in, the next steps for retrieving the values seem clear:
curl -k -X POST "https://share1.dexcom.com/ShareWebServices/Services/Publisher/ReadPublisherLatestGlucoseValues?sessionID=GUID&minutes=1440&maxCount=1" -H "Accept: application/json" -H "Content-Length: 0"
Is there anything to try that could give us a clue as to how to take the first step and log in to this API?
Open the Dexcom Clarity website.
In your browser (let's assume Chrome for now), open the developer tools and go to the "Network" tab.
Select the "data" XHR call. This is the "hidden API call" that populates the graphs in Dexcom Clarity.
Right-click and select "Copy > Copy as cURL (bash)". That will give you the cURL command that authenticates and logs into the website.
(Optional) Take the cURL command and paste it into https://curl.trillworks.com/ to translate it into some other language.
curl 'https://clarity.dexcom.com/api/subject/1522320180078ZZZZZZ/analysis_session/1560634749054XXXXXXX/data' \
  -H 'Origin: https://clarity.dexcom.com' \
  -H 'Accept-Encoding: gzip, deflate, br' \
  -H ..... \
  -H 'Referer: https://clarity.dexcom.com/' \
  --data-binary '{"localDateTimeInterval":["2016-05-01/2016-07-29"]}' \
  --compressed
Notice that you can change the date range by changing the values passed in localDateTimeInterval and get access to the full data in the account at a resolution of 5-minute measurements. The returned JSON also includes all other events, such as calibrations.
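For example, pulling a whole year is just a matter of editing that one field (same account-specific placeholders as above, headers elided):

curl 'https://clarity.dexcom.com/api/subject/1522320180078ZZZZZZ/analysis_session/1560634749054XXXXXXX/data' \
  -H 'Origin: https://clarity.dexcom.com' \
  -H ..... \
  --data-binary '{"localDateTimeInterval":["2016-01-01/2016-12-31"]}' \
  --compressed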

Difficulties using UNIX cURL to scrape Ajax Wicket Information

I have been instructed to write UNIX shell scripts that scrape certain websites. We use Fiddler to trace the HTTP requests, then we write the cURL commands accordingly. For the most part, scraping most websites seems fairly simple; however, I've run into a situation where I'm having difficulty capturing certain information.
I need to be somewhat generic in saying that I cannot provide the website address that I am actually looking at, however I can post some of the requests and responses to provide context.
Here's the situation:
The website starts with a search screen. You enter your search query and the website returns a list of results.
I need to choose the first result from the result page.
I need to capture EVERYTHING on the page from the first result.
Everything up until this point is working fine.
Here's the problem:
The page returned has hyperlinks that are Wicket Ajax links. When these links are pressed, a window pops up within the page. It is not actually a window like a pop-up created by JavaScript; it is more comparable to what you see when you 'compose a message' or 'poke' someone on Facebook (am I the only one who still does that?).
I need to capture the contents of that pop-up window. There are usually multiple wicket links on a given page. Handling that should be easy enough with a loop, but I need to figure out the proper way to cURL those wickets first.
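For what it's worth, the loop I have in mind is shaped like this (a sketch; $links would hold the wicket URLs extracted from the result page, and COOKIE is the cookie jar used below):

# Sketch: fetch each wicket pop-up into its own file.
i=0
for link in $links; do
    i=$((i + 1))
    /bin/curl -b COOKIE -c COOKIE \
        -H "Accept: text/xml" \
        -H "Referer: $URL$x" \
        -H "Wicket-Ajax: true" \
        -sLf "$link" -o "popup_$i.xml"
done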
Here is the cURL command I'm currently using to attempt to scrape the wickets.
(I'm explicitly defining the referrer URL, the Accept header, and the Wicket-Ajax boolean, as these were the items sent in the header when I traced the site.) $link is the URL, which looks like this:
http://www.someDomainName.com/searches/?x=as56f1sa65df1&random=0.121345151
(The random, I believe, is populated with some JavaScript; I'm not sure if that's needed or even possible to recreate. I'm currently sending one of the randoms that I received on one particular occasion.)
/bin/curl -v3 -b COOKIE -c COOKIE -H "Accept: text/xml" -H "Referer: $URL$x" -H "Wicket-Ajax: true" -sLf "$link"
Here is the response I get:
<ajax-response><redirect><![CDATA[home.page;jsessionid=6F45DF769D527B98DD1C7FFF3A0DF089]]></redirect>
</ajax-response>
I am expecting an XML document with actual content to be returned. Any insight into this issue would be greatly appreciated. Please let me know if you need more information.
Thanks,
Paul

How do I find all data from a JSON POST request with curl

I am playing with the open data project at spogo.co.uk (Sport England).
See here for a search example: https://spogo.co.uk/search#all/Football Pitch/near-london/range-5.
I have been using Cygwin and curl to POST JSON data to the MVC controller. An example is below:
curl -i -X POST -k -H "Accept: application/json" -H "Content-Type: application/json; charset=utf-8" https://spogo.co.uk/search/all.json --data '{"searchString":"","address": {"latitude":55,"longitude":-3},"page":0}'
Question:
How can I find out what other variables can be included in the post data?
How can I return all results, rather than just 20 at a time? Cycling through page numbers doesn't deliver all at once.
AJAX is simply a technique for posting data over a connection asynchronously, and JSON is just a string format that can contain data. Neither has a built-in mechanism for querying information such as which fields are accepted or how much data is returned.
You will want to check the web service documentation on spogo.co.uk for these answers. IF their web service exposes such functionality, it will be the final authority on what the commands and formats are.
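That said, if paging turns out to be the only mechanism the service exposes, a loop that walks the page field in the POST body is a pragmatic workaround (a sketch; the grep stop condition is hypothetical, since the real response fields aren't documented):

# Walk the page parameter until a page comes back with no results.
page=0
while :; do
    body=$(printf '{"searchString":"","address":{"latitude":55,"longitude":-3},"page":%d}' "$page")
    resp=$(curl -s -X POST -k -H "Accept: application/json" \
        -H "Content-Type: application/json; charset=utf-8" \
        https://spogo.co.uk/search/all.json --data "$body")
    echo "$resp" | grep -q '"Name"' || break   # placeholder field check
    printf '%s\n' "$resp" >> all_results.json
    page=$((page + 1))
done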
