SoftLayer_Virtual_Guest_Block_Device_Template_Group::editObject update datacenters - image

The createFromExternalSource method does not allow you to specify a list of datacenters in which the image should be enabled. I can get the image imported, but when I then go to edit the image template to add datacenters, I always get:
{"error":"Object does not exist to execute method on.
(SoftLayer_Virtual_Guest_Block_Device_Template_Group::editObject)","code":"SoftLayer_Exception"}
Does anyone have a proper example of using editObject for SoftLayer_Virtual_Guest_Block_Device_Template_Group with JSON rather than XML-RPC, which is what the slcli uses?
A curl example that updates attributes on a SoftLayer_Virtual_Guest_Block_Device_Template_Group object would be awesome.

To update the datacenters of an image template you have to use the setAvailableLocations method of the SoftLayer_Virtual_Guest_Block_Device_Template_Group service.
The editObject method only edits an image template group's name and note.
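For reference, a minimal sketch of what editObject itself accepts, assuming the same REST conventions as the example below (the name and note values here are made-up placeholders):
curl -d @edit.json -k https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest_Block_Device_Template_Group/1722867/editObject.json
where edit.json wraps the template object in the usual parameters envelope:
{
    "parameters": [
        {
            "name": "my-image-template",
            "note": "updated via the REST API"
        }
    ]
}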
You can use this curl example to update the datacenters of an image template:
curl -d @post.json -k https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest_Block_Device_Template_Group/1722867/setAvailableLocations.json | python -mjson.tool
The curl command reads the request body from the JSON file post.json, which holds the list of locations.
The json file must contain the following data:
{
    "parameters": [
        [
            { "id": 265592 },
            { "id": 814994 },
            { "id": 138124 },
            { "id": 154820 },
            { "id": 449600 }
        ]
    ]
}
To get the datacenter ids you can use this curl command example:
curl -k "https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Location/getDatacenters.json" | python -mjson.tool

Related

Unable to return any data in AppSync console with search - using @searchable directive in Amplify

I've added a @searchable directive to my Amplify/GraphQL schema as follows:
type Card
  @model
  @searchable
{
  name: String
  id: ID!
}
I've added some items, which I can retrieve with listCards in my AppSync Console:
query MyQuery {
  listCards {
    items {
      name
    }
  }
}
# Returns:
{
  "data": {
    "listCards": {
      "items": [
        { "name": "hunter" },
        { "name": "url1" },
        { "name": "testThur" },
        { "name": "testThur2" },
        ...
}
Now, when I try to use searchCards I can't get it to return anything:
query MyQuery {
  searchCards(filter: {name: {ne: "nonsense"}}) {
    nextToken
    total
    items {
      name
    }
  }
}
# Returns:
{
  "data": {
    "searchCards": {
      "nextToken": null,
      "total": null,
      "items": []
    }
  }
}
How do I get this working?
I noticed that new cards that I add are returned, but ones that were added before adding the @searchable directive don't get returned.
There's a grey info paragraph in the docs https://docs.amplify.aws/cli/graphql/search-and-result-aggregations/:
Once the @searchable directive is added, all new records added to the model are streamed to OpenSearch. To backfill existing data, see Backfill OpenSearch index from DynamoDB table.
It looks like any previous items that I've created on the database won't be streamed to OpenSearch, and therefore won't be returned by 'search' AppSync calls.
We're directed here: https://docs.amplify.aws/cli/graphql/troubleshooting/#backfill-opensearch-index-from-dynamodb-table
We are instructed to use the provided python file with this command (comments indicate what each flag means):
# --rn: the region in which your table and OpenSearch domain reside
# --tn: the DynamoDB table name
# --lf: the ARN of the DynamoDB-to-OpenSearch streaming Lambda function
# --esarn: the event source ARN, i.e. the full DynamoDB table stream ARN
python3 ddb_to_es.py \
  --rn 'us-west-2' \
  --tn 'Post-XXXX-dev' \
  --lf 'arn:aws:lambda:us-west-2:<...>:function:amplify-<...>-OpenSearchStreamingLambd-<...>' \
  --esarn 'arn:aws:dynamodb:us-west-2:<...>:table/Post-<...>/stream/2019-20-03T00:00:00.350'
(I've tried this with my region, ARNs, and DynamoDB references, but when I hit enter in my CLI it just goes to the next prompt and nothing happens. I've not used python before. Hopefully someone here has more luck?)
You should run the script like this:
python3 fileNameToYourScript.py --rn <region> --tn <fullTableName> --lf <arnToYourOpenSearchLambdaFunction> --esarn <arnToYourTableName>
Remove the angle brackets and replace them with the actual values, with no quotation marks.
Another thing: I kept getting an error that credentials couldn't be found. In case you also get it, I fixed it by going to .aws/credentials and duplicating my profile details, naming the copy [default]. I also did the same in .aws/config, duplicating my region details and naming the copy [default].
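For reference, a rough sketch of what those two files end up looking like after the duplication (the key values, the region, and the [myprofile] name are placeholders for whatever your existing named profile contains):
# ~/.aws/credentials
[myprofile]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[profile myprofile]
region = us-west-2

[default]
region = us-west-2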

How can I compose a bot to iterate through an XML/JSON object?

I am using Composer to publish a bot that fetches data from an Azure Storage table.
In short, I need Composer to construct a bot that iterates through an XML-deserialized JSON object returned by the Azure Storage REST API.
In the code generated by Composer, the bot does a "set property" step immediately following the successful return of the REST API call (the storage table query). Given the deserialized object returned by the storage REST API, how should the "set property" statement be constructed so the bot can print out the individual data fields?
Another way to phrase the question: how can I use Composer to construct a bot that iterates through a returned deserialized object (coded in XML/JSON format)?
Where can I find a document that can shed some light on this matter?
Is there any place I can find a good example? Can it be done via composer?
Thanks in advance.
Yes, it can be done. If the API returns XML, make sure you configure your API call to ask for content type application/xml.
Then you can use the xPath built-in function. Note that it returns an array if more than one value matches the expression, in which case you can use the foreach function to iterate over it (see the sketch after the example below). I needed to run the nightly build of Composer (with bot-builder 4.12.0) to get it to work for me. See here for some more info:
https://github.com/microsoft/botbuilder-js/pull/3093
Here's an example that worked for me:
"actions": [
{
"$kind": "Microsoft.SendActivity",
"$designer": {
"id": "rGv7XC"
},
"activity": "${SendActivity_rGv7XC()}"
},
{
"$kind": "Microsoft.HttpRequest",
"$designer": {
"id": "TDA1wO"
},
"method": "GET",
"url": "http://www.geoplugin.net/xml.gp?ip=157.54.54.128",
"resultProperty": "dialog.api_response",
"contentType": "application/xml"
},
{
"$kind": "Microsoft.SetProperty",
"$designer": {
"id": "ipNhfY"
},
"property": "dialog.timezone",
"value": "=xPath(dialog.api_response.content,'/geoPlugin/geoplugin_timezone/text()')"
},
{
"$kind": "Microsoft.SendActivity",
"$designer": {
"id": "DxohEx"
},
"activity": "${SendActivity_DxohEx()}"
}
]
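And here is the foreach pattern mentioned above, for the case where the xPath expression matches several nodes. This is only a sketch: the /geoPlugin/*/text() expression and the dialog.matches property are made-up illustrations, and it assumes the Foreach action's default loop variable dialog.foreach.value:
{
  "$kind": "Microsoft.SetProperty",
  "property": "dialog.matches",
  "value": "=xPath(dialog.api_response.content, '/geoPlugin/*/text()')"
},
{
  "$kind": "Microsoft.Foreach",
  "itemsProperty": "dialog.matches",
  "actions": [
    {
      "$kind": "Microsoft.SendActivity",
      "activity": "${dialog.foreach.value}"
    }
  ]
}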
You can (if needed) use the json and jPath built-in functions to convert the XML to JSON and then query that instead. Something like:
${json(user.testXml)} and then
${jPath(user.testJson, "automobiles")}
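Wired into a dialog, that two-step conversion might look roughly like this (user.testXml, user.testJson, dialog.automobiles, and the "automobiles" path are placeholder names carried over from the snippet above):
{
  "$kind": "Microsoft.SetProperty",
  "property": "user.testJson",
  "value": "=json(user.testXml)"
},
{
  "$kind": "Microsoft.SetProperty",
  "property": "dialog.automobiles",
  "value": "=jPath(user.testJson, 'automobiles')"
}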

How to filter unique values with jq?

I'm using the gcloud describe command to get metadata information about instances. What's the best way to filter the JSON response with jq to get the name of the instance if it contains "kafka" as a key?
.name + " " + .metadata.items[]?.key | select(contains("kafka"))
Basically, if items contains kafka, print name. This is just a small excerpt from the JSON file:
"metadata": {
"fingerprint": "xxxxx=",
"items": [
{
"key": "kafka",
"value": "xxx="
},
{
"key": "some_key",
"value": "vars"
}
],
"kind": "compute#metadata"
},
"name": "instance-name",
"networkInterfaces": [
{
"accessConfigs": [
{
"kind": "compute#accessConfig",
"name": "External NAT",
"natIP": "ip",
"type": "ONE_TO_ONE_NAT"
}
],
"kind": "compute#networkInterface",
"name": "",
"network": xxxxx
}
],
I'm sure this is possible with jq, but in general working with gcloud lists is going to be easier using the built-in formatting and filtering:
$ gcloud compute instances list \
--filter 'metadata.items.key:kafka' \
--format 'value(name)'
--filter tells gcloud which instances to pick; in this case, it grabs the instance metadata, looks at the items, and checks the keys for those containing kafka (use = instead to look for keys that are exactly kafka).
--format tells gcloud to grab just one value() (as opposed to a table, JSON, or YAML) from each matching instance; that value will be the name of the instance.
You can learn more by running gcloud topic filters, gcloud topic formats, and gcloud topic projections.
Here is a simple jq solution using if and any:
if .metadata.items | any(.key == "kafka") then . else empty end
| .name
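Wired up against gcloud, the whole pipeline might look like this (the instance name and the --format json flag are assumptions; any command that prints the instance's JSON will do). Note that the any(f) form used here needs jq 1.5 or later:
gcloud compute instances describe instance-name --format json \
  | jq -r 'if .metadata.items | any(.key == "kafka") then .name else empty end'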

ElasticSearch MapperParsingException object mapping

I followed an article about ElasticSearch and I am trying to run this example on my engine.
example:
curl -XPUT 'elasticsearch:9200/twitter/tweet/1' -d '{
"user": "david",
"message": "C'est mon premier message de la journée !",
"postDate": "2010-03-15T15:23:56",
"priority": 2,
"rank": 10.2
}'
I tried to send this information via a bash script (I use PuTTY), but I got this error:
{"error":"MapperParsingException[object mapping for [tweet] tried to parse as object,
but got EOF, has a concrete value been provided to it?]","status":400}
I also tried to spot the error with "cat -e tweet.sh", but I don't understand why I'm getting it.
Thanks in advance.
It's a type mismatch. I faced this issue too. It looks like you are trying to index a value into a field that is mapped as an object. I.e., at some point you indexed something like this:
{
  "obj1": {
    "field1": "value1"
  }
}
and then indexed this:
{
  "obj1": "value"
}
Check your existing mapping via elasticsearch:9200/twitter/_mapping and you will see whether one of the fields was indexed as an object.
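A quick sketch of that check from the shell, reusing the host from the example above (the ?pretty flag just makes the output readable):
curl -XGET 'elasticsearch:9200/twitter/_mapping?pretty'
If a field shows "properties" nested under it in the response, it was mapped as an object, and indexing a concrete value into it will trigger this kind of mismatch.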

Parse response from a "folder items" request to find a file

Using v2 of the Box API, I use the folder items request to get information on files in a folder: http://developers.box.com/docs/#folders-retrieve-a-folders-items
I'm looking at parsing the response data. Any ideas how I can do this in bash to easily find a file in the user's account? I would like to find the file by name and get the ID of the file as well.
The response looks something like this:
{
  "total_count": 25,
  "entries": [
    {
      "type": "file",
      "id": "531117507",
      "sequence_id": "0",
      "etag": "53a93ebcbbe5686415835a1e4f4fff5efea039dc",
      "name": "agile-web-development-with-rails_b10_0.pdf"
    },
    {
      "type": "file",
      "id": "1625774972",
      "sequence_id": "0",
      "etag": "32dd8433249b1a59019c465f61aa017f35ec9654",
      "name": "Continuous Delivery.pdf"
    },
    { ...
For bash, you can use sed or awk. Look at Parsing JSON with Unix tools.
Also, if you can use a programming language, then Python may be your fastest option. It has a nice json module: http://docs.python.org/library/json.html. It has a simple decode API which will give you a dict as the output.
Then
import json
response_dict = json.loads(your_response)
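Building on that, a minimal sketch of the lookup itself (your_response stands in for the raw response body, and the target filename is just the one from the sample above; the entries, type, name, and id fields all appear in that sample):
import json

response_dict = json.loads(your_response)
for entry in response_dict["entries"]:
    # Each entry carries its type, name, and id, per the sample response
    if entry["type"] == "file" and entry["name"] == "Continuous Delivery.pdf":
        print(entry["id"])  # prints 1625774972 for the sample data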
I recommend using jq for parsing/munging json in bash. It is WAY better than trying to use sed or awk to parse it.
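For example, assuming the response above is saved in response.json, something like this pulls out the ID of a file by name (the filename is again the one from the sample):
jq -r '.entries[] | select(.type == "file" and .name == "Continuous Delivery.pdf") | .id' response.json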
