CloudFormation - AWS - Serverless - yaml

I want my output to be as below:
{
  "EVENT": "start",
  "NAME": "schedule-rule-start",
  "ECS_CLUSTER": "gps-aws-infra-ecs-cluster",
  "ECS_SERVICE_NAME": [
    "abc",
    "def"
  ],
  "DESIRED_COUNT": "1",
  "REGION": "eu-central-1"
}
My Input in yaml is as below:
Input: !Sub '{"EVENT": "${self:custom.input.StartECSServiceRule.Event}", "NAME": "${self:custom.input.StartECSServiceRule.Name}", "ECS_CLUSTER": "${self:custom.input.StartECSServiceRule.ClusterName}", "ECS_SERVICE_NAME": "${self:custom.input.StartECSServiceRule.ECSServiceName}","DESIRED_COUNT": "${self:custom.input.StartECSServiceRule.DesiredCount}","REGION": "${self:provider.region}"}'
I want the generated CloudFormation JSON to be as below:
"{\"EVENT\":\"start\",\"NAME\":\"schedule-rule-start\",\"ECS_CLUSTER\":\"gps-aws-infra-ecs-cluster\",\"ECS_SERVICE_NAME\":[\"abc\",\"def\"],\"DESIRED_COUNT\":\"1\",\"REGION\":\"eu-central-1\"}"
I need help on how I should provide the value of "ECSServiceName"; the below does not work in the Input I gave in the yaml file:
"${self:custom.input.StartECSServiceRule.ECSServiceName}"
Please help me correct the Input in my yaml file. Right now it comes through as a string; I want it to come through as an array.
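One approach that may work (a sketch only, not verified against your full serverless.yml, and assuming the custom keys below match yours) is to keep the service list pre-serialized as a JSON array string in custom, and drop the quotes around that one substitution in Input so the array text lands in the generated JSON as an array rather than a string:

custom:
  input:
    StartECSServiceRule:
      Event: start
      Name: schedule-rule-start
      ClusterName: gps-aws-infra-ecs-cluster
      # assumption: keep the service names pre-serialized as a JSON array string
      ECSServiceName: '["abc","def"]'
      DesiredCount: "1"

# ... later, in the rule definition ...

Input: !Sub '{"EVENT": "${self:custom.input.StartECSServiceRule.Event}", "NAME": "${self:custom.input.StartECSServiceRule.Name}", "ECS_CLUSTER": "${self:custom.input.StartECSServiceRule.ClusterName}", "ECS_SERVICE_NAME": ${self:custom.input.StartECSServiceRule.ECSServiceName}, "DESIRED_COUNT": "${self:custom.input.StartECSServiceRule.DesiredCount}", "REGION": "${self:provider.region}"}'

The only change to the original Input is that the ECS_SERVICE_NAME substitution is no longer wrapped in double quotes, so Serverless pastes the serialized array straight into the JSON string.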

Related

How to filter unique values with jq?

I'm using the gcloud describe command to get metadata information about instances. What's the best way to filter the JSON response with jq to get the name of the instance if it contains "kafka" as a key?
.name + " " + .metadata.items[]?.key | select(contains("kafka"))'
Basically, if items contains kafka, print the name. This is just a small excerpt from the JSON file:
"metadata": {
"fingerprint": "xxxxx=",
"items": [
{
"key": "kafka",
"value": "xxx="
},
{
"key": "some_key",
"value": "vars"
}
],
"kind": "compute#metadata"
},
"name": "instance-name",
"networkInterfaces": [
{
"accessConfigs": [
{
"kind": "compute#accessConfig",
"name": "External NAT",
"natIP": "ip",
"type": "ONE_TO_ONE_NAT"
}
],
"kind": "compute#networkInterface",
"name": "",
"network": xxxxx
}
],
I'm sure this is possible with jq, but in general working with gcloud lists is going to be easier using the built-in formatting and filtering:
$ gcloud compute instances list \
--filter 'metadata.items.key:kafka' \
--format 'value(name)'
--filter tells you which items to pick; in this case, it grabs the instance metadata, looks at the items, and checks the keys for those containing kafka (use = instead to look for keys that are exactly kafka).
--format tells you to grab just one value() (as opposed to a table, JSON, YAML) from each matching item; that item will be the name of the instance.
You can learn more by running gcloud topic filters, gcloud topic formats, and gcloud topic projections.
Here is a simple jq solution using if and any:
if .metadata.items | any(.key == "kafka") then . else empty end
| .name
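For example, the filter can be fed the describe output directly (the instance name and zone here are just placeholders):

gcloud compute instances describe my-instance --zone europe-west1-b --format json \
  | jq -r 'if .metadata.items | any(.key == "kafka") then . else empty end | .name'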

How to reference the Amazon Data Pipeline name?

Is it possible to use the name of an Amazon Data Pipeline as a variable inside the Data Pipeline itself? If yes, how can you do that?
Unfortunately, you can't refer to the name. You can refer to the pipeline ID using the expression #pipelineId. For example, you could define a ShellCommandActivity to print the pipeline ID:
{
  "id": "ExampleActivity",
  "name": "ExampleActivity",
  "runsOn": { "ref": "SomeEc2Resource" },
  "type": "ShellCommandActivity",
  "command": "echo \"Hello from #{#pipelineId}\""
}

kinesis agent to lambda, how to get origin file and server

I have a Kinesis agent that streams a lot of log file information to Kinesis streams, and I have a Lambda function that parses the info.
On Lambda, in addition to the string, I need to know the source file name and machine name. Is that possible?
You can add it to the data that you send to Kinesis.
Lambda gets Kinesis records as a base64 string; you can encode into this string a JSON of this form:
{
  "machine": [machine],
  "data": [original data]
}
And then, when processing the records on Lambda (Node.js):
// decode the base64 Kinesis payload and parse the wrapper JSON
let record_object = JSON.parse(Buffer.from(event.Records[0].kinesis.data, 'base64').toString('utf8'));
let machine = record_object.machine;
let data = record_object.data;
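On the producer side, a minimal sketch of wrapping each line before putting it on the stream might look like this (assuming the Node.js AWS SDK v2; the stream name, the extra file field, and using the host name as the machine identifier are all assumptions):

const os = require('os');
const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();

function sendLine(line, fileName) {
  // wrap the original log line with machine and source-file info
  const payload = {
    machine: os.hostname(),
    file: fileName,
    data: line
  };
  return kinesis.putRecord({
    StreamName: 'my-log-stream',   // assumption: replace with your stream name
    PartitionKey: os.hostname(),
    Data: JSON.stringify(payload)
  }).promise();
}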
Assuming you are using the Kinesis Agent to produce the data stream: the open-source community has added ADDEC2METADATA as a preprocessing option in the agent (see the agent's source code).
Make sure that the source content file is in JSON format. If the original format is CSV, then use the CSVTOJSON transformer first to convert it to JSON, and then pipe it to the ADDEC2METADATA transformer as shown below.
Open agent.json and add the following:
"flows": [
{
"filePattern": "/tmp/app.log*",
"kinesisStream": "my-stream",
"dataProcessingOptions": [
{
"optionName": "CSVTOJSON",
"customFieldNames": ["your", "custom", "field", "names","here", "if","origin","file","is","csv"],
"delimiter": ","
},
{
"optionName": "ADDEC2METADATA",
"logFormat": "RFC3339SYSLOG"
}
]
}
]
}
If your code is running in a container/ECS/EKS etc., where the originating info is not as simple as collecting info about a bare-metal EC2 instance, then use the "ADDMETADATA" option as shown below in agent.json:
{
  "optionName": "ADDMETADATA",
  "timestamp": "true/false",
  "metadata": {
    "key": "value",
    "foo": {
      "bar": "baz"
    }
  }
}

How do I access data from a JSON file in Ruby tests?

I have a JSON file:
{
  "user": "John",
  "first": "John",
  "last": "Wilson",
  "updated": "2013-02-17",
  "generated_at": "2013-02-13",
  "version": 1.1
}
I want to use this as the data file for my Ruby test and want to access the data in this file. I am doing the data verification as:
application[data_first].should eq 'John'
I want to refer to the expected data from the JSON file using something like:
application[data_first].should eq JSON_file[first]
Assuming the JSON file's name is "JSON_file", I also have added require 'JSON_file' at the top of my test script.
How do I access the data from JSON file?
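For what it's worth, a minimal sketch of loading the file with Ruby's standard json library (the path test_data.json is an assumption; point it at your actual file):

require 'json'

# parse the fixture file into a Hash with string keys
json_data = JSON.parse(File.read('test_data.json'))

# then the expectation can compare against the parsed data
application[data_first].should eq json_data['first']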

Parse response from a "folder items" request to find a file

Using v2 of the Box API, I use the folder items request to get information on files in a folder: http://developers.box.com/docs/#folders-retrieve-a-folders-items
I'm looking at trying to parse the response data. Any ideas how I can do this in bash to easily find a file in the user's account? I would like to find the file by name so that I can get the ID of the file as well.
The response looks something like this:
{
  "total_count": 25,
  "entries": [
    {
      "type": "file",
      "id": "531117507",
      "sequence_id": "0",
      "etag": "53a93ebcbbe5686415835a1e4f4fff5efea039dc",
      "name": "agile-web-development-with-rails_b10_0.pdf"
    },
    {
      "type": "file",
      "id": "1625774972",
      "sequence_id": "0",
      "etag": "32dd8433249b1a59019c465f61aa017f35ec9654",
      "name": "Continuous Delivery.pdf"
    },
    { ...
For bash, you can use sed or awk. Look at Parsing JSON with Unix tools.
Also, if you can use a programming language, then Python can be your fastest option. It has a nice json module (http://docs.python.org/library/json.html) with a simple decode API that gives a dict as the output.
Then
import json
response_dict = json.loads(your_response)
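Continuing from that snippet, finding a file's id by name is then just a dict/list walk (the file name here is only the one from the sample response above):

# look up the id for a given file name in the folder-items response
for entry in response_dict['entries']:
    if entry['type'] == 'file' and entry['name'] == 'Continuous Delivery.pdf':
        print(entry['id'])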
I recommend using jq for parsing/munging json in bash. It is WAY better than trying to use sed or awk to parse it.
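For example, a sketch that pulls the id for a given file name out of the folder-items response (the file name is taken from the sample above; response.json is assumed to hold the saved response):

jq -r '.entries[] | select(.type == "file" and .name == "Continuous Delivery.pdf") | .id' response.json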
