Resource-based AWS Lambda::Permission with multiple principals in YAML? - aws-lambda

Working on a CloudFormation project, I have a resource-based policy attached to my Lambda; it's something similar to the following in YAML:
Mappings:
  AccMap:
    Alpha:
      AWSAcc: 1234567 # aws account numbers
    Beta:
      AWSAcc: 2345678
    Prod:
      AWSAcc: 3456789

PermissionPolicy:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref LambdaNameHere
    Principal:
      Fn::FindInMap:
        - AccMap
        - !Ref Stage # defined elsewhere
        - AWSAcc
I want to grant multiple accounts this permission, for example, multiple accounts in Beta. How would I go about it in YAML? Can I just make AWSAcc an array, like this?
Mappings:
  Beta:
    AWSAcc:
      - 1234567
      - 2345678

You can't pass a list. The Principal property of an AWS::Lambda::Permission resource accepts a single string value:
{
  "Type" : "AWS::Lambda::Permission",
  "Properties" : {
    "Action" : String,
    "EventSourceToken" : String,
    "FunctionName" : String,
    "FunctionUrlAuthType" : String,
    "Principal" : String,
    "PrincipalOrgID" : String,
    "SourceAccount" : String,
    "SourceArn" : String
  }
}
As a workaround, add multiple Permission resources and look up the principal for each. !FindInMap returns the right AWSAcc list for the stage. !Select picks the right principal element from the list:
Principal: !Select [ "0", !FindInMap [ AccMap, !Ref Stage, AWSAcc ] ]
Change "0" to "1" for the second principal's Permission, and so on. Note that each stage must have an equal number of principals, or you will get an out of bounds error.
Edit: If the stages have an unequal number of principals, define a Condition and apply it to the "extra" Permission resources.
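A minimal sketch of the combined setup (the resource names and the HasSecondPrincipal condition are hypothetical; the AccMap mapping, Stage parameter, and LambdaNameHere reference are assumed from the question):
LambdaPermissionA:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref LambdaNameHere
    Principal: !Select [ "0", !FindInMap [ AccMap, !Ref Stage, AWSAcc ] ]

LambdaPermissionB:
  Type: AWS::Lambda::Permission
  Condition: HasSecondPrincipal # hypothetical condition, true only for stages with a second account
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref LambdaNameHere
    Principal: !Select [ "1", !FindInMap [ AccMap, !Ref Stage, AWSAcc ] ]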

Related

AWS SAM - Apply policy template for a resource created conditionally

I create a DynamoDb table conditionally:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount
and this is how IsDevAccount is defined using an input parameter:
Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]
Now I'm creating a Lambda function that accepts the table's name (amongst other things) as input through environment variables. This, too, is done conditionally. Within the function's code I'd check whether the table name is passed (it's empty when the condition isn't met) and, if so, put some items into it.
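Roughly, the check inside the function would look something like this (a hypothetical sketch; the TABLE_NAME variable and the "id" key attribute are just examples):
import os
import boto3

TABLE_NAME = os.environ.get("TABLE_NAME", "")  # empty when the table wasn't created

def handler(event, context):
    if not TABLE_NAME:
        return  # no table in this account, skip the writes
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    table.put_item(Item={"id": "example"})  # assumes the table's partition key is "id"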
However, I'm not sure how to apply policy templates to the function's role conditionally. Normally I do it like this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          TableName: !Ref MyDynamoTable
What happens to the function's execution role if the table isn't created because the condition isn't met (e.g.: in another account)? Can I apply this policy template conditionally, as well?
What I don't want to do is to blindly give write permission to all DynamoDB tables within the account.
Yes, you could add a condition to the DB write policy so that the write policy is only applied when the condition is met.
Since you're creating the table only when the environment is staging or development, you could apply a condition on the policy that checks for your table name and then applies the write policy. Example below:
MyDynamoTable:
  Type: AWS::DynamoDB::Table
  Condition: IsDevAccount

Conditions:
  IsDevAccount: !Equals [ !Ref Stage, dev ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Equals [ !Ref MyDynamoTable, "myTableName" ]
          TableName: !Ref MyDynamoTable
Update in response to comments:
!Ref returns the value of the specified parameter or resource. We need parameters with allowed values for the environment and the DB table name for the condition:
Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - stage
      - prod
  MyDynamoTable:
    Description: table name for the db
    Type: String
    AllowedValues:
      - tableOne
      - tableTwo
      - myTableName

Conditions:
  IsDevAccount: !Equals [ !Ref Environment, "dev" ]
  TableExists: !Equals [ !Ref MyDynamoTable, "myTableName" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !And [ IsDevAccount, TableExists ] # with the added parameters, the TableExists condition alone is enough
          TableName: !Ref MyDynamoTable
Refs:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/parameters-section-structure.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/conditions-section-structure.html
Update 2:
Agreed; I researched and confirmed that there is no way to check for resources created in the same stack template (that's why I suggested a parameter). Conditions are entirely parameter based.
If the resource was already created in another stack, you could handle this through resource import, but I don't think resource import helps with your requirement.
A workaround would be to have a Boolean parameter for the TableExists condition and pass its value through the AWS CLI at deploy time, like below:
MyDynamoTable:
  Description: dynamo db table
  Type: String
  AllowedValues:
    - true
    - false

Conditions:
  TableExists: !Equals [ !Ref MyDynamoTable, "true" ]

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - DynamoDBWritePolicy:
          Condition: !Ref TableExists
          TableName: !Ref MyDynamoTable
Pass the required parameters through the AWS CLI on deploy (along with any other parameters the template needs):
aws cloudformation deploy --stack-name <stackName> --template-file templateName.yml --parameter-overrides MyDynamoTable="true" dynamoDBtableName="myTableName"

How do I access the Cognito UserPoolClient Secret in Lambda function?

I have created a Cognito UserPool and UserPoolClient via Resources in my serverless.yml file like this -
CognitoUserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    AccountRecoverySetting:
      RecoveryMechanisms:
        - Name: verified_email
          Priority: 2
    UserPoolName: ${self:provider.stage}-user-pool
    UsernameAttributes:
      - email
    MfaConfiguration: "OFF"
    Policies:
      PasswordPolicy:
        MinimumLength: 8
        RequireLowercase: True
        RequireNumbers: True
        RequireSymbols: True
        RequireUppercase: True

CognitoUserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    ClientName: ${self:provider.stage}-user-pool-client
    UserPoolId:
      Ref: CognitoUserPool
    ExplicitAuthFlows:
      - ALLOW_USER_PASSWORD_AUTH
      - ALLOW_REFRESH_TOKEN_AUTH
    GenerateSecret: true
Now I can pass the user pool and user pool client IDs as environment variables to the Lambda functions like this -
my_function:
  package: {}
  handler:
  events:
    - http:
        path: <path>
        method: post
        cors: true
  environment:
    USER_POOL_ID: !Ref CognitoUserPool
    USER_POOL_CLIENT_ID: !Ref CognitoUserPoolClient
I can access these IDs in my code as -
USER_POOL_ID = os.environ['USER_POOL_ID']
USER_POOL_CLIENT_ID = os.environ['USER_POOL_CLIENT_ID']
I have printed the values and they are correct. However, the UserPoolClient also generates an app client secret, which I need when computing the secret hash. How can I access the app client secret (the UserPoolClient's secret) in my Lambda?
Probably not what you hoped for, but you cannot explicitly export the client secret in CloudFormation. Take a look at the return values from AWS::Cognito::UserPoolClient; there you can only get the client ID.
What you could do is create the client in another CF template and either add a custom resource there that reads the secret and outputs it, or have an intermediate step where you fetch the value with the CLI and then pass it into Serverless.
There is currently no other option.
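For the CLI route, the intermediate step could look something like the sketch below (the IDs come from the stack that already owns the client; the USER_POOL_CLIENT_SECRET variable name and a matching ${env:USER_POOL_CLIENT_SECRET} reference in serverless.yml are assumptions):
# read the app client secret (requires cognito-idp:DescribeUserPoolClient permission)
CLIENT_SECRET=$(aws cognito-idp describe-user-pool-client \
  --user-pool-id "$USER_POOL_ID" \
  --client-id "$USER_POOL_CLIENT_ID" \
  --query 'UserPoolClient.ClientSecret' \
  --output text)

# hand it to the Serverless deploy as an environment variable
USER_POOL_CLIENT_SECRET="$CLIENT_SECRET" serverless deploy --stage dev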

Hide or encrypt credentials in AWS Data Pipeline

I am creating an AWS Data Pipeline to copy data from MySQL to S3. I have written a shell script which accepts credentials as arguments and creates the pipeline, so that my credentials are not exposed in the script.
I used the bash script below to create the pipeline:
unique_id="$(date +'%s')"
profile="${4}"
startDate="${1}"
echo "{\"values\":{\"myS3CopyStartDate\":\"$startDate\",\"myRdsUsername\":\"$2\",\"myRdsPassword\":\"$3\"}}" > mysqlToS3values.json
sqlpipelineId=`aws datapipeline create-pipeline --name mysqlToS3 --unique-id mysqlToS3_$unique_id --profile $profile --query '{ID:pipelineId}' --output text`
validationErrors=`aws datapipeline put-pipeline-definition --pipeline-id $sqlpipelineId --pipeline-definition file://mysqlToS3.json --parameter-objects file://mysqlToS3Parameters.json --parameter-values-uri file://mysqlToS3values.json --query 'validationErrors' --profile $profile`
aws datapipeline activate-pipeline --pipeline-id $sqlpipelineId --profile $profile
However, when I fetch the pipeline definition through the AWS CLI using
aws datapipeline get-pipeline-definition --pipeline-id 27163782
I get my credentials in plain text in the JSON output:
{
  "parameters": [...],
  "objects": [...],
  "values": {
    "myS3CopyStartDate": "2018-04-05T10:00:00",
    "myRdsPassword": "sbc",
    "myRdsUsername": "ksnck"
  }
}
Is there any way to encrypt or hide the credentials information?
I don't think there is a way to mask the data in the pipeline definition.
The strategy I have used is to store my secrets in S3 (encrypted with a specific KMS key and using appropriate IAM/bucket permissions). Then, inside my Data Pipeline step, I use the AWS CLI to read the secret from S3 and pass it to the mysql command or whatever.
So instead of having a pipeline parameter like myRdsPassword I have:
"myRdsPasswordFile": "s3://mybucket/secrets/rdspassword"
Then inside my step I read it with something like:
PWD=$(aws s3 cp ${myRdsPasswordFile} -)
You could also have a similar workflow that retrieves the password from AWS Parameter Store instead of S3.
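For the Parameter Store variant, the step could read the secret with something like this (a sketch; it assumes a SecureString parameter named /myapp/rds/password exists and the pipeline's role is allowed to read and decrypt it):
PASSWORD=$(aws ssm get-parameter \
  --name /myapp/rds/password \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)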
There is actually a way that's built into data pipelines:
You prepend the field name with an * and it will encrypt the field and hide it visually, like a password form field.
If you're using parameters, prepend the * to both the object field and the corresponding parameter field, like so (note that there are three *s in a parameterized setup; the example below is just a sample, with required fields omitted to keep it short and illustrate how to handle the encryption through parameters):
...
  {
    "*password": "#{*myDbPassword}",
    "name": "DBName",
    "id": "DB"
  }
],
"parameters": [
  {
    "id": "*myDbPassword",
    "description": "Database password",
    "type": "String"
  }
...
See more below:
https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-characters.html
You can store the RDS credentials in AWS Secrets Manager and then retrieve them from Secrets Manager in the data pipeline using a CloudFormation template, as described below:
Mappings:
  RegionToDatabaseConfig:
    us-west-2:
      CredentialsSecretKey: us-west-2-SECRET_NAME
      # ...
    us-east-1:
      CredentialsSecretKey: us-east-1-SECRET_NAME
      # ...
    eu-west-1:
      CredentialsSecretKey: eu-west-1-SECRET_NAME
      # ...

Resources:
  OurProjectDataPipeline:
    Type: AWS::DataPipeline::Pipeline
    Properties:
      # ...
      PipelineObjects:
        # ...
        # RDS resources
        - Id: PostgresqlDatabase
          Name: Source database to sync data from
          Fields:
            - Key: type
              StringValue: RdsDatabase
            - Key: username
              StringValue: !Join
                - ''
                - - '{{resolve:secretsmanager:'
                  - !FindInMap
                    - RegionToDatabaseConfig
                    - {Ref: 'AWS::Region'}
                    - CredentialsSecretKey
                  - ':SecretString:username}}'
            - Key: "*password"
              StringValue: !Join
                - ''
                - - '{{resolve:secretsmanager:'
                  - !FindInMap
                    - RegionToDatabaseConfig
                    - {Ref: 'AWS::Region'}
                    - CredentialsSecretKey
                  - ':SecretString:password}}'
            - Key: jdbcProperties
              StringValue: 'allowMultiQueries=true'
            - Key: rdsInstanceId
              StringValue: !FindInMap
                - RegionToDatabaseConfig
                - {Ref: 'AWS::Region'}
                - RDSInstanceId

What do `<<` and `&` mean in YAML?

When I reviewed the cryptogen (a Fabric command) config file, I saw these symbols.
Profiles:
  SampleInsecureSolo:
    Orderer:
      <<: *OrdererDefaults ## what is the `<<`
      Organizations:
        - *ExampleCom ## what is the `*`
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1ExampleCom
          - *Org2ExampleCom
Above there are two symbols, << and *.
Application: &ApplicationDefaults # what does the `&` mean
  Organizations:
As you can see, there is another symbol, &.
I don't know what they mean. I didn't find any information even after reviewing the source code (fabric/common/configtx/tool/configtxgen/main.go).
Well, those are elements of the YAML file format, which is used here to provide a configuration file for configtxgen. The "&" sign marks an anchor and "*" is a reference to the anchor; this is basically used to avoid duplication. For example:
person: &person
  name: "John Doe"

employee: &employee
  <<: *person
  salary: 5000
will reuse the fields of person and has a similar meaning to:
employee: &employee
  name: "John Doe"
  salary: 5000
Another example is simply reusing a value:
key1: &key some very common value
key2: *key
equivalent to:
key1: some very common value
key2: some very common value
Since fabric/common/configtx/tool/configtxgen/main.go uses an off-the-shelf YAML parser, you won't find any reference to these symbols in the configtxgen-related code. I would suggest reading a bit more about the YAML file format.
In YAML, if the data is like:
user: &userId '123'
username: *userId
the equivalent YAML is:
user: '123'
username: '123'
or the equivalent JSON is:
{
  "user": "123",
  "username": "123"
}
So it basically allows you to reuse data. You can also try it with a mapping or array instead of a single value like 123.
Try converting the YAML below to JSON using any online YAML-to-JSON converter:
users: &users
  k1: v1
  k2: v2
usernames: *users
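The converter should produce something like:
{
  "users": {
    "k1": "v1",
    "k2": "v2"
  },
  "usernames": {
    "k1": "v1",
    "k2": "v2"
  }
}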

Swagger yaml and json file for REST application developed using Play framework in Scala

I am trying to configure Swagger for my application. Being new to this field, I went through different tutorials and tried to convert the JSON below to YAML, but it gives errors like bad indentation, missing response, etc. The main problem I am facing is figuring out the syntax to represent an array of lists in YAML, and then adding a block in YAML that shows the expected values for a particular block.
JSON Format to be converted to YAML:
{
  "abc": [
    {
      "xyz": [ // array of list
        {
          "id": "",
          "name": "",
          "relation": [ // array of list
            {
              "first": {
                "xxx": "",
                "xxx": "",
                "xxx": [ // array of string
                  ""
                ]
              },
              "second": {
                "xxx": "",
                "xxx": "",
                "xxx": [
                  ""
                ],
                "type": ""
              }
            },
            {
              "first": {
                "xxx": "",
                "xxx": "",
                "xxx": [ // array of string
                  ""
                ]
              },
              "second": {
                "xxx": "",
                "xxx": "",
                "xxx": [
                  ""
                ],
                "type": ""
              }
            }
          ],
          "rows": [
          ]
        }
      ]
    }
  ]
}
YAML is as below:
swagger: "2.0"
info:
  version: 1.0.0
  title: xxxx
  description: xxxx
schemes:
  - https
host: xxxx
basePath: xxxx
paths:
  /xxx:
    post:
      summary: xxxx
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        abc:
          - xyz:
              id: string
              name: string
              relation: string
          - first:
              id: string
              name: string
              relation: string
            second:
              id: string
              name: string
              relation: string
          - first:
              id: string
              name: string
              relation: string
            second:
              id: string
              name: string
              relation: string
      responses:
        '200':
          description: Created
When working with YAML you talk about sequences (besides mappings and scalars). A sequence is what gets mapped to a list in Python and an array in some other languages.
So if you are talking about "representing an array of lists in YAML", you are actually referring to a sequence of sequences. There are three ways to represent this in YAML.
block-style within block-style:
- - a
  - b
- - c
  - d
, flow-style within block-style:
- [a, b]
- [c, d]
, or flow-style within flow-style:
[[a, b,], [c, d,],]
Any online YAML parser will show you that the above amounts to the same.
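For reference, all three load to the same data, which in JSON form would be:
[["a", "b"], ["c", "d"]]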
Please note:
You cannot have block-style within flow-style
You can have trailing commas (something that JSON doesn't allow, and which makes JSON unnecessarily hard to edit for humans).
In your example YAML output, which is correct YAML, there is no sequence of sequences (or array of lists, in your terminology).
