Terraform - Place variable inside EOF tag - aws-lambda

I have a Terraform file that I'm reusing to create several AWS EventBridge rules (as triggers for some Lambdas).
In another part of the file I'm able to use for_each to create several EventBridge rules and name them accordingly. My problem is that I haven't been able to do the same thing inside the EOF tag (whose contents need to differ for each EventBridge rule), since everything inside it is taken as a literal string.
I would need to replace the ARN in "prefix": "arn:aws:medialive:us-west-2:11111111111:channel:3434343" with a variable. How could I do that?
This is the EOF part of the terraform code:
event_pattern = <<EOF
{
  "source": ["aws.medialive"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["medialive.amazonaws.com"],
    "eventName": ["StopChannel"],
    "responseElements": {
      "arn": [{
        "prefix": "arn:aws:medialive:us-west-2:11111111111:channel:3434343"
      }]
    }
  }
}
EOF
}

It's called a heredoc string, not an "EOF tag". "EOF" just happens to be the delimiter you are using to mark the beginning and end of a multi-line string. You could use anything there that doesn't occur in the string itself; you could replace "EOF" with "MYMULTILINESTRING", for example.
To place the value of a variable in a heredoc string in Terraform, you do exactly what you would do with any other string in Terraform: you use string interpolation.
event_pattern = <<EOF
{
  "source": ["aws.medialive"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["medialive.amazonaws.com"],
    "eventName": ["StopChannel"],
    "responseElements": {
      "arn": [{
        "prefix": "${var.my_arn_variable}"
      }]
    }
  }
}
EOF
}
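For JSON payloads like this, it is often cleaner to skip the heredoc entirely and build the document with Terraform's jsonencode() function, which handles quoting and escaping for you. A minimal sketch, reusing the hypothetical var.my_arn_variable from above:

```hcl
# Sketch: the same event pattern built with jsonencode() instead of a heredoc.
# var.my_arn_variable is a hypothetical variable holding the channel ARN.
event_pattern = jsonencode({
  "source"      = ["aws.medialive"]
  "detail-type" = ["AWS API Call via CloudTrail"]
  "detail" = {
    "eventSource" = ["medialive.amazonaws.com"]
    "eventName"   = ["StopChannel"]
    "responseElements" = {
      "arn" = [{
        "prefix" = var.my_arn_variable
      }]
    }
  }
})
```

This also lets Terraform catch a malformed document at plan time rather than when the API rejects it.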

Related

Issues getting flow to send the correct json in body when using powerautomate's http request

I'm using a PowerAutomate Flow to call a native SmartSheet API that does a POST. The POST IS working but my MULTI_PICKLIST type field is not being populated correctly in SmartSheet due to the double quotes.
The API is: concat('https://api.smartsheet.com/2.0/sheets/', variables('vSheetID'), '/rows')
In the Body section of the http rest API I form my JSON and the section of interest looks like this:
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      #{variables('vServices')}
    ]
  }
}
My variable vServices raw output looks like:
{
  "body": {
    "name": "vServices",
    "value": "Test1, Test2"
  }
}
The format needs to be like this (it works using Postman):
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1","Test2"
    ]
  }
}
As a step in formatting my vServices variable I tried a replace function to replace ',' with '","', but this ultimately ends up as \",\".
Any suggestion on how to get around this? Ultimately, I need the JSON body to read like this, which I haven't been able to achieve in the Body section:
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1","Test2"
    ]
  }
}
versus this (when using the replace function):
{
  "columnId": 6945615984781188,
  "objectValue": {
    "objectType": "MULTI_PICKLIST",
    "values": [
      "Test1\",\"Test2"
    ]
  }
}
Thank you in advance,
I resolved my issue by taking the original variable and sending it to a Compose step that did a split on the comma separator. I then added a step to set a new variable to the output of the Compose step. This left me with a perfectly set-up array in exactly the format I needed, and it resolved the issues I was having with double quotes and escape sequences.
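In expression form, the fix described above amounts to a single split() call (a sketch; vServices is the variable from the question, and the ", " separator assumes the values are comma-plus-space delimited, as in the raw output shown):

```
split(variables('vServices'), ', ')
```

Because split() returns a proper array, referencing it in the "values" field lets the flow serialize it as "Test1","Test2" without any manual quote handling or escaping.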

How to use carriage return in a script template with a runtime mapping field?

Here is an example that illustrates the problem we are having with "mustache" and the carriage return.
In our script template, we need:
a runtime mapping field: to compute a result (with a big script in our real case)
a conditional template: to build search criteria according to the params' existence (many criteria in our real case)
We use Elasticsearch 7.16 and the Kibana debug console for our tests.
We created this script template with this request:
POST _scripts/test
{
  "script": {
    "lang": "mustache",
    "source": """{
      "runtime_mappings": {
        "result": {
          "type": "long",
          "script": {
            "source": "emit({{#foo}}{{foo}}{{/foo}}{{^foo}}0{{/foo}})"
          }
        }
      }
      {{#foo}}
      ,"fields": [
        "result"
      ]
      {{/foo}}
    }"""
  }
}
Here are 2 examples of requests that show how this script works.
Request 1: Search request with a param
Returns the computed field "result" with the "foo" parameter value (12345).
GET _search/template
{
  "id": "test",
  "params": {
    "foo": 12345
  }
}
Request 2: Search request without a param
Doesn't return the computed field "result".
GET _search/template
{
  "id": "test"
}
Like I said before, in our real case we have a very big Painless script in the computed field.
For readability, we therefore wrote this script over several lines, and that's when a problem appeared.
An error happens when we declare:
"source": "
emit({{#foo}}{{foo}}{{/foo}}{{^foo}}0{{/foo}})
"
instead of:
"source": "emit({{#foo}}{{foo}}{{/foo}}{{^foo}}0{{/foo}})"
Due to the JSON specification, we cannot use carriage returns, otherwise we get the following error:
Illegal unquoted character ((CTRL-CHAR, code 10)): has to be escaped using backslash to be included in string value
We also cannot use the """ notation, because it would conflict with the one used to declare the source of the script template.
Is there a trick to spread the computed field's script over multiple lines in the Kibana debug console?

How to prevent Aws Step Function from inserting the parent key "Input" when it invokes a Lambda?

How can I define an AWS Step Function state that passes precisely the same hash into an invoked Lambda that I supplied to the Step Function (e.g., without pushing the input hash down one level under a new key "Input")?
My Ruby AWS Lambda function assumes the incoming event hash looks like:
{
  "queryStringParameters": {
    "foo": "bar"
  }
}
When I perform a test execution on an AWS Step Function which invokes that Lambda, and supply the same hash shown above, the event hash that gets passed into the Lambda is not the same as the hash I provided to the Step Function: it has an extra parent key called "Input":
{
  "Input": {
    "queryStringParameters": {
      "foo": "bar"
    }
  }
}
In the Step Function, the state which invokes the lambda is defined by:
"invoke foobar": {
"Type": "Task",
"Resource": "arn:aws:states:::lambda:invoke",
"Parameters": {
"FunctionName": "arn:aws:lambda:xxxx:xxxx:function:xxxx:$LATEST",
"Payload": {
"Input.$": "$"
}
},
"Next": "Done",
"TimeoutSeconds": 10
},
Or will a Step Function always take its input and put it "under" a key called "Input"?
And if that is the case that an "Input" key is always added to the event hash passed to a Lambda function, how does one write a Lambda so it can be invoked from both a Step Function (which assumes a root key of "Input") and an API Gateway (which uses a different root key "queryStringParameters")?
Instead of this:
"Payload": {
  "Input.$": "$"
}
you should do this:
"Payload.$": "$"
That will pass in the input directly to the Payload of the lambda function.
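Applied to the state definition from the question, the Task becomes (a sketch based on the snippet above):

```json
"invoke foobar": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {
    "FunctionName": "arn:aws:lambda:xxxx:xxxx:function:xxxx:$LATEST",
    "Payload.$": "$"
  },
  "Next": "Done",
  "TimeoutSeconds": 10
},
```

Because the Lambda now receives the state input unchanged, the same handler can also be invoked from API Gateway without any special casing.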

AWS Stepfunction pass data to next lambda without all the extra padding

I have created a state machine with AWS CDK (TypeScript) and it all works fine. It is just that the output of Lambda 1, which is the input for Lambda 2, has some sort of state-machine padding which I am not interested in.
Definition of state machine:
{
  "StartAt": "",
  "States": {
    "...applicationPdf": {
      "Next": "...setApplicationProcessed",
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "...applicationPdf",
        "Payload.$": "$"
      }
    },
    "...setApplicationProcessed": {
      "Next": "Done",
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "...applicationPdf",
        "Payload.$": "$"
      }
    },
    "Done": {
      "Type": "Succeed"
    }
  }
}
Output of Lambda1 (applicationPdf):
{
  "ExecutedVersion": "$LATEST",
  "Payload": {
    ...
  },
  "SdkHttpMetadata": {
    "AllHttpHeaders": {
      ...
    },
    "HttpHeaders": {
      ...
    },
    "HttpStatusCode": 200
  },
  "SdkResponseMetadata": {
    ...
  },
  "StatusCode": 200
}
So I am only interested in Payload, not all the other stuff.
The reason is that if I want to run the 2nd Lambda separately, I want the event going into it to be just the Payload object, not the object with ExecutedVersion etc.
Does anyone know how to do this?
I will have a look at the Parameters option of the definition; maybe the answer lies there.
Thanks for your question and for your interest in Step Functions.
The ResultSelector and OutputPath fields can be used to manipulate the output of a state, which is particularly helpful when a state outputs values you do not need in subsequent states. The difference between them is that ResultSelector is applied before the state's ResultPath, while OutputPath is applied after it.
As you noted, you can use OutputPath to filter out any unwanted metadata before it is passed on to the next state.
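In raw ASL, that filtering looks something like the following sketch (the elided function name is a placeholder carried over from the question):

```json
"...applicationPdf": {
  "Type": "Task",
  "Resource": "arn:aws:states:::lambda:invoke",
  "Parameters": {
    "FunctionName": "...applicationPdf",
    "Payload.$": "$"
  },
  "OutputPath": "$.Payload",
  "Next": "...setApplicationProcessed"
}
```

With "OutputPath": "$.Payload", only the Payload object from the invocation result is handed to the next state; the SDK metadata is dropped.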
I found one solution: add the outputPath:
return new LambdaInvoke(this, 'lamba', {
  lambdaFunction: Function.fromFunctionArn(this, name, this.createLabmdaArn('applicationPdf')),
  outputPath: '$.Payload',
});
This seems to work and might be THE solution.

How to reference the Amazon Data Pipeline name?

Is it possible to use the name of an Amazon Data Pipeline as a variable inside the Data Pipeline itself? If yes, how can you do that?
Unfortunately, you can't refer to the name. You can refer to the pipeline ID using the expression #pipelineId. For example, you could define a ShellCommandActivity to print the pipeline ID:
{
  "id": "ExampleActivity",
  "name": "ExampleActivity",
  "runsOn": { "ref": "SomeEc2Resource" },
  "type": "ShellCommandActivity",
  "command": "echo \"Hello from #{#pipelineId}\""
}
