Related
How can I use XPath to return an array based on the existence of a specific property?
Below is a section of my JSON file. Under "root" there are a number of array objects, and SOME of them contain the property "detection". I would like to retrieve the "service_name" of each array object ONLY IF that object (under "root") contains the property "detection".
For example, "service_name": "IPS" should be returned, because its parent object contains "detection",
but a "service_name" should NOT be returned when the property "detection" is not present in its parent object.
Finally, is there a way to combine the above into one query, in order to return an array of the "service_name" and "detection" values together, based on the same condition?
My current Power Automate Set Variable command is:
xpath(xml(variables('varProductsRoot')), '//detection | //service_name')
and unfortunately it returns ALL service_names, even if the component they belong to does not contain the "detection" property.
Below is a sample of the JSON file I am trying to parse:
{
"root": {
"fg": [
{
"product_name": "fg",
"remediation": {
"type": "package",
"packages": [
{
"service": "ips",
"service_name": "IPS",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Exploitation"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "22.414"
}
]
},
"detection": {
"attackid": [
51006,
50825
]
}
}
],
"fweb": [
{
"product_name": "fWeb",
"remediation": {
"type": "package",
"packages": [
{
"service": "waf",
"service_name": "Web App Security",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Exploitation"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "0.00330"
}
]
},
"detection": {
"signature_id": [
"090490119",
"090490117"
]
}
}
],
"fcl": [
{
"product_name": "fcl",
"remediation": {
"type": "package",
"packages": [
{
"service": "vuln",
"service_name": "Vulnerability",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Delivery"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "1.348"
}
]
},
"detection": {
"vulnid": [
69887,
2711
]
}
},
{
"product_name": "fcl",
"remediation": {
"type": "package",
"packages": [
{
"service": "ob-detect",
"service_name": "ob Detection",
"kill_chain": {
"step": "sm/SOAR"
},
"link": "https://www.fgd.com/services",
"minimum_version": "1.003"
}
]
}
}
],
"fss": [
{
"product_name": "fss",
"remediation": {
"type": "package",
"packages": [
{
"service": "ips",
"service_name": "IPS",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Exploitation"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "22.414"
}
]
}
}
],
"fadc": [
{
"product_name": "fADC",
"remediation": {
"type": "package",
"packages": [
{
"service": "ips",
"service_name": "IPS",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Exploitation"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "22.414"
}
]
},
"detection": {
"ips_rulename": [
"Error.Log.Remote.Code.Execution",
"Server.cgi-bin.Path.Traversal"
]
}
},
{
"product_name": "fADC",
"remediation": {
"type": "package",
"packages": [
{
"service": "waf",
"service_name": "Web App Security",
"description": "Detects and Blocks attack attempts",
"kill_chain": {
"step": "Exploitation"
},
"link": "https://fgd.fnet.com/updates",
"minimum_version": "1.00038"
}
]
},
"detection": {
"sigid": [
1002017267,
1002017273
]
}
}
],
"fsm": [
{
"product_name": "fsm",
"remediation": {
"type": "package",
"packages": [
{
"service": "ioc",
"service_name": "IOC",
"kill_chain": {
"step": "sm/SOAR"
},
"link": "https://www.fgd.com/services",
"minimum_version": "0.02355"
}
]
}
}
]
}
}
Thank you in advance,
Nikos
This will work for you. I've broken it up into three steps for ease ...
Step 1
This contains your JSON as you provided it. The variable is defined as an Object.
Step 2
Initialise a string variable that contains the following expression ...
xml(variables('JSON'))
... which (as you know) will convert the JSON to XML.
Step 3
This is an Array variable that will extract the values of all service_name elements where the detection element exists, using the following expression ...
xpath(xml(variables('XML')), '//detection/..//service_name/text()')
Result
Voila! You have your values in an array.
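For anyone wanting to see why that expression filters correctly, here is a minimal Python sketch using lxml against a hand-simplified stand-in for the generated XML (the element names mirror the sample, but the XML itself is illustrative, not the exact output of xml()). The step up to the parent (..) is what restricts the result to service_name values whose containing object also has a detection element; the predicate form //*[detection] is equivalent.

from lxml import etree

# Hand-simplified stand-in for the XML that xml(variables('JSON')) produces.
doc = etree.fromstring(b"""
<root>
  <fg>
    <remediation><packages><service_name>IPS</service_name></packages></remediation>
    <detection><attackid>51006</attackid></detection>
  </fg>
  <fcl>
    <remediation><packages><service_name>ob Detection</service_name></packages></remediation>
  </fcl>
</root>
""")

# Select every <detection>, step up to its parent, then take any descendant
# <service_name> text under that parent.
print(doc.xpath('//detection/..//service_name/text()'))   # ['IPS']

# Equivalent predicate form: any element that has a <detection> child.
print(doc.xpath('//*[detection]//service_name/text()'))   # ['IPS']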
How do I search for a person with BOTH given names I provide?
I have the following 2 patients who are "close". Everything in the HumanName area is the same except for one of the given names.
Note "Apple" vs "Banana".
{
"resourceType": "Bundle",
"id": "269caf66-0ccc-43e7-b9a5-f16f84db0149",
"meta": {
"lastUpdated": "2019-11-20T19:30:26.858917+00:00"
},
"type": "searchset",
"link": [
{
"relation": "self",
"url": "https://localhost:44348/Patient?given=Jingerheimer"
}
],
"entry": [
{
"fullUrl": "https://localhost:44348/Patient/504f6bd3-e9b4-4846-8948-97bf09c70722",
"resource": {
"resourceType": "Patient",
"id": "504f6bd3-e9b4-4846-8948-97bf09c70722",
"meta": {
"versionId": "1",
"lastUpdated": "2019-11-20T19:26:11.005+00:00"
},
"identifier": [
{
"system": "ssn",
"value": "111-11-1111"
},
{
"system": "uuid",
"value": "da55d068e0784b359fa97498a11543c5"
}
],
"name": [
{
"family": "Smith",
"given": [
"John",
"Apple",
"Jingerheimer"
]
}
]
},
"search": {
"mode": "match"
}
},
{
"fullUrl": "https://localhost:44348/Patient/10054ce9-6141-4eca-bc5b-0978f8c8afcb",
"resource": {
"resourceType": "Patient",
"id": "10054ce9-6141-4eca-bc5b-0978f8c8afcb",
"meta": {
"versionId": "1",
"lastUpdated": "2019-11-20T19:26:48.962+00:00"
},
"identifier": [
{
"system": "ssn",
"value": "222-22-2222"
},
{
"system": "uuid",
"value": "52d09f9436d44591816fd229dd139523"
}
],
"name": [
{
"family": "Smith",
"given": [
"John",
"Banana",
"Jingerheimer"
]
}
]
},
"search": {
"mode": "match"
}
}
]
}
One patient's given names include "Apple"; the other's include "Banana".
This search works fine:
https://localhost:44348/Patient/?given=Jingerheimer
What I have tried is:
https://localhost:44348/Patient/?given=Jingerheimer&given=Apple
but that gives me no results.
Note: omitting "given=Jingerheimer" is not an option; that filter is needed to exclude a bunch of other patients.
I'm trying to get
"Has BOTH of the given names I provide"
Your syntax is correct, so I think the server does not handle the search correctly. Can you check the self link for your second search to see whether it reflects the search you performed? Does the result Bundle have an OperationOutcome detailing that something went wrong? If all of that seems okay, you'll need to check your server's code.
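For what it's worth, repeating the parameter is indeed how FHIR search expresses AND (both values must match), while a comma-separated list inside one parameter means OR. A minimal sketch with Python's requests, assuming the local server base URL from the question and a self-signed certificate:

import requests

BASE = "https://localhost:44348"  # base URL from the question

# Repeating the same search parameter ANDs the values: only patients whose
# given names include BOTH "Jingerheimer" and "Apple" should be returned.
resp = requests.get(
    f"{BASE}/Patient",
    params=[("given", "Jingerheimer"), ("given", "Apple")],
    verify=False,  # local self-signed certificate; adjust as needed
)
bundle = resp.json()
print(bundle.get("total"), [e["resource"]["id"] for e in bundle.get("entry", [])])

# By contrast, given=Jingerheimer,Apple (one parameter, comma-separated)
# means OR: patients with EITHER given name match.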
I am trying to build a CloudFormation template but I'm having some trouble connecting my Oracle RDS instance to my two subnets.
My RDS resource definition is:
"3DCFDB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBInstanceClass": "db.t2.micro",
"AllocatedStorage": "20",
"Engine": "oracle-se2",
"EngineVersion": "12.1.0.2.v13",
"MasterUsername": {
"Ref": "user"
},
"MasterUserPassword": {
"Ref": "password"
}
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "*"
}
},
"DependsOn": [
"3DEXPSUBPU",
"3DSUBPRI"
]
}
What property am I supposed to add to connect my RDS instance to the two subnets?
If I understand correctly, you need to create a resource of type "AWS::RDS::DBSubnetGroup", then inside your "AWS::RDS::DBInstance" resource you can refer to the subnet group with something similar to this:
"3DCFDB": {
"Type": "AWS::RDS::DBInstance",
"Properties": {
"DBInstanceClass": "db.t2.micro",
"AllocatedStorage": "20",
"Engine": "oracle-se2",
"EngineVersion": "12.1.0.2.v13",
"DBSubnetGroupName": {
"Ref": "DBsubnetGroup"
},
"MasterUsername": {
"Ref": "user"
},
"MasterUserPassword": {
"Ref": "password"
}
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "*"
}
},
"DependsOn": [
"3DEXPSUBPU",
"3DSUBPRI"
]
},
"DBsubnetGroup": {
"Type" : "AWS::RDS::DBSubnetGroup",
...
...
}
More info can be found here
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbsubnet-group.html
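The linked page has the full property list. Outside CloudFormation, the same relationship is easy to see with boto3; this is only an illustrative sketch (the names, region and subnet IDs are placeholders): the subnet group bundles your two subnets, and the DB instance references it by name, which is exactly what DBSubnetGroupName does in the template.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

# A DB subnet group is just a named collection of subnets (ideally spanning
# at least two AZs); the RDS instance then refers to it by name.
rds.create_db_subnet_group(
    DBSubnetGroupName="oracle-db-subnet-group",           # placeholder name
    DBSubnetGroupDescription="Subnets for the Oracle RDS instance",
    SubnetIds=["subnet-11111111", "subnet-22222222"],     # your two subnet IDs
)

rds.create_db_instance(
    DBInstanceIdentifier="oracle-db",                     # placeholder name
    DBInstanceClass="db.t2.micro",
    Engine="oracle-se2",
    AllocatedStorage=20,
    MasterUsername="user",
    MasterUserPassword="password",
    DBSubnetGroupName="oracle-db-subnet-group",
)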
Hi all, I have the following CloudFormation template that creates a RabbitMQ cluster using RMQ autoclustering. It works. However, every time I run it, all the instances end up in the same AZ! I've verified that the stack variables are correct. The subnets are set correctly as well. It's all created in the correct account. Not sure what else to try. I'm wondering if something is incorrect in the VPC that was supplied to me; a quick way to check the supplied subnets is sketched after the template below.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"EnvironmentValue": {
"AllowedValues": [
"Dev",
"Test"
],
"Default": "Dev",
"Description": "What environment is this?",
"Type": "String"
},
"RabbitMQErlangCookie": {
"Description": "The erlang cookie to propagate to all nodes in the cluster",
"Type": "String",
"MinLength": "20",
"MaxLength": "20",
"Default": "TGFBTKPLRTOYFHNVSTWN",
"AllowedPattern": "^[A-Z]*$",
"NoEcho": true
},
"RabbitMQAdminUserID": {
"Description": "The admin user name to create on the RabbitMQ cluster",
"Type": "String",
"MinLength": "5",
"MaxLength": "20",
"Default": "admin",
"AllowedPattern": "[a-zA-Z0-9]*",
"NoEcho": true
},
"RabbitMQAdminPassword": {
"Description": "The admin password for the admin account",
"Type": "String",
"MinLength": "5",
"MaxLength": "20",
"Default": "xxxxxx",
"AllowedPattern": "[a-zA-Z0-9!]*",
"NoEcho": true
},
"InstanceAvailabilityZones" : {
"Description" : "A list of avilability zones in which instances will be launched. ",
"Type" : "CommaDelimitedList",
"Default" : "us-east-1e,us-east-1d"
},
"Environment": {
"Description": "The environment to confgiure (dev, test, stage, prod",
"Type": "String",
"AllowedValues": [
"d",
"t"
],
"Default": "d",
"NoEcho": false
}
},
"Mappings": {
"Environments" : {
"Dev": {
"VPCProtectedApp":"vpc-protected-app",
"VPCProtectedDb":"vpc-protected-db",
"VPCProtectedFe":"vpc-protected-fe",
"ELB": "App-Dev",
"SecurityGroup": "sg-soa-db",
"Identifier": "d",
"Prefix": "Dev",
"RMQELB": "elb-soa-db-rmq-dev",
"RMQELBTargetGroup": "elb-soarmq-target-group-dev",
"RMQSubnets": "subnet-soa-db-1,subnet-soa-db-2",
"RMQSecurityGroup":"sg-soa-db",
"RMQClusterMin": "3",
"RMQClusterMax": "3",
"ConsulELB": "elb-soa-db-cons-dev",
"ConsulSubnets": "subnet-soa-db-1,subnet-soa-db-2",
"ConsulSecurityGroup":"sg-soa-db-cons",
"ConsulClusterMin": "3",
"ConsulClusterMax": "3"
},
"Test": {
"VPCProtectedApp":"vpc-protected-app",
"VPCProtectedDb":"vpc-protected-db",
"VPCProtectedFe":"vpc-protected-fe",
"ELB": "App-Dev",
"SecurityGroup": "sg-soa-db",
"Identifier": "t",
"Prefix": "Test",
"RMQELB": "elb-soa-db-rmq-test",
"RMQELBTargetGroup": "elb-soarmq-target-group-test",
"RMQSubnets": "subnet-soa-db-1,subnet-soa-db-2",
"RMQSecurityGroup":"sg-soa-db",
"RMQClusterMin": "3",
"RMQClusterMax": "3",
"ConsulELB": "elb-soa-db-cons-test",
"ConsulSubnets": "subnet-soa-db-1,subnet-soa-db-2",
"ConsulSecurityGroup":"sg-soa-db-cons",
"ConsulClusterMin": "3",
"ConsulClusterMax": "3"
}
}
},
"Resources": {
"RabbitMQRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/",
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingInstances",
"ec2:DescribeInstances"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:Submit*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "ec2:DescribeInstances",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
}
]
}
},
"RabbitMQInstanceProfile": {
"Type": "AWS::IAM::InstanceProfile",
"Properties": {
"Path": "/",
"Roles": [
{
"Ref": "RabbitMQRole"
}
]
}
},
"ELBSOARabbitMQ": {
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties": {
"Name": {"Fn::FindInMap" : ["Environments", {"Ref" : "EnvironmentValue" },"RMQELB"]},
"Scheme": "internet-facing",
"Subnets": [
{
"Fn::ImportValue" : "subnet-soa-db-1"
},
{
"Fn::ImportValue" : "subnet-soa-db-2"
}
],
"SecurityGroups": [
{
"Fn::ImportValue" : "sg-soa-db"
}
]
}
},
"ELBSOARMQListener": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"DefaultActions": [
{
"TargetGroupArn": {
"Ref": "ELBSOARMQTargetGroup"
},
"Type": "forward"
}
],
"LoadBalancerArn": {
"Ref": "ELBSOARabbitMQ"
},
"Port": 80,
"Protocol": "HTTP"
}
},
"ELBSOARMQListenerRule": {
"Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
"Properties": {
"Actions": [
{
"TargetGroupArn": {
"Ref": "ELBSOARMQTargetGroup"
},
"Type": "forward"
}
],
"Conditions": [
{
"Field": "path-pattern",
"Values": [
"/"
]
}
],
"ListenerArn": {
"Ref": "ELBSOARMQListener"
},
"Priority": 1
}
},
"ELBSOARMQTargetGroup": {
"Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"TargetType": "instance",
"HealthCheckIntervalSeconds": 30,
"HealthCheckPort": 15672,
"HealthCheckProtocol": "HTTP",
"HealthCheckTimeoutSeconds": 3,
"HealthyThresholdCount": 2,
"Name":{"Fn::FindInMap" : ["Environments", {"Ref" : "EnvironmentValue" },"RMQELBTargetGroup"]},
"Port": 15672,
"Protocol": "HTTP",
"UnhealthyThresholdCount": 2,
"VpcId": {
"Fn::ImportValue" : "vpc-protected-db"
}
}
},
"SOARMQServerGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"DependsOn": "ELBSOARabbitMQ",
"Properties": {
"LaunchConfigurationName": {
"Ref": "SOARMQEc2InstanceLC"
},
"MinSize": "3",
"MaxSize": "5",
"TargetGroupARNs": [
{
"Ref": "ELBSOARMQTargetGroup"
}
],
"Tags": [
{
"ResourceType": "auto-scaling-group",
"ResourceId": "my-asg",
"InstanceName": "rabbitmq",
"PropagateAtLaunch": true,
"Value": "test",
"Key": "environment"
},
{
"ResourceType": "auto-scaling-group",
"ResourceId": "my-asg",
"InstanceName": "rabbitmq",
"PropagateAtLaunch": true,
"Value": "vavd-soa-rmq",
"Key": "Name"
}
],
"AvailabilityZones" : { "Ref" : "InstanceAvailabilityZones" },
"VPCZoneIdentifier": [
{
"Fn::ImportValue": "subnet-soa-db-1"
},
{
"Fn::ImportValue": "subnet-soa-db-2"
}
]
}
},
"SOARMQEc2InstanceLC": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"DependsOn": "ELBSOARabbitMQ",
"Properties": {
"IamInstanceProfile" : { "Ref" : "RabbitMQInstanceProfile" },
"ImageId": "ami-5e414e24",
"InstanceType": "m1.small",
"KeyName": "soa_dev_us_east_1",
"SecurityGroups": [
{
"Fn::ImportValue" : "sg-soa-db"
}
],
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"",
[
"#!/bin/bash -xe\n",
"sudo su\n",
"exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1\n",
"echo \"1. Installing yum updates\"\n",
"sudo yum update -y\n",
"sudo yum install wget -y\n",
"sudo yum install socat -y\n",
"yum install -y aws-cfn-bootstrap\n",
"echo \"2. Downloading erlang distro and install\"\n",
"wget https://github.com/rabbitmq/erlang-rpm/releases/download/v20.3.0/erlang-20.3-1.el6.x86_64.rpm\n",
"sudo rpm -ivh erlang-20.3-1.el6.x86_64.rpm\n",
"export EC2_PUBLIC_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)\n",
"echo \"3. Downloading rabbitmq distro and installing\"\n",
"wget http://dl.bintray.com/rabbitmq/all/rabbitmq-server/3.7.4/rabbitmq-server-3.7.4-1.el6.noarch.rpm\n",
"sudo rpm -Uvh rabbitmq-server-3.7.4-1.el6.noarch.rpm\n",
"export RABBITMQ_USE_LONGNAME=true\n",
"echo \"4. Setting the erlang cookie for clustering\"\n",
"sudo sh -c \"echo ''",
{
"Ref": "RabbitMQErlangCookie"
},
"'' > /var/lib/rabbitmq/.erlang.cookie\"\n",
"sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie\n",
"sudo chmod 600 /var/lib/rabbitmq/.erlang.cookie\n",
"echo \"5. Writing the rabbitmq configurations for AWS Autocluster Group peer discovery\"\n",
"sudo cat << EOF > /etc/rabbitmq/rabbitmq.conf\n",
"cluster_formation.peer_discovery_backend = rabbit_peer_discovery_aws\n",
"cluster_formation.aws.region = us-east-1\n",
"cluster_formation.aws.use_autoscaling_group = true\n",
"log.console.level = debug\n",
"log.file.level = debug\n",
"EOF\n",
"echo \"6. Enable the management and peer discovery plugins\"\n",
"sudo rabbitmq-plugins enable rabbitmq_management\n",
"sudo rabbitmq-plugins --offline enable rabbitmq_peer_discovery_aws\n",
"echo \"7. Restart the service - stop the app prior to clustering\"\n",
"sudo service rabbitmq-server restart\n",
"sudo rabbitmqctl stop_app\n",
"sudo rabbitmqctl reset\n",
"echo \"8. Starting the application\"\n",
"sudo rabbitmqctl start_app\n",
"echo \"9. Adding admin user and setting permissions\"\n",
"sudo rabbitmqctl add_user ",
{
"Ref": "RabbitMQAdminUserID"
},
" ",
{
"Ref": "RabbitMQAdminPassword"
},
"\n",
"sudo rabbitmqctl set_user_tags ",
{
"Ref": "RabbitMQAdminUserID"
},
" administrator\n",
"sudo rabbitmqctl set_permissions -p / ",
{
"Ref": "RabbitMQAdminUserID"
},
" \".*\" \".*\" \".*\" \n",
"echo \"10. Configuration complete!\"\n"
]
]
}
}
}
}
}
}
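A quick way to check whether the supplied subnets actually span two AZs (the subnet IDs below are placeholders for whatever the subnet-soa-db-1 / subnet-soa-db-2 exports resolve to):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: use the values the subnet-soa-db-1 / subnet-soa-db-2
# exports resolve to.
subnet_ids = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"]

for s in ec2.describe_subnets(SubnetIds=subnet_ids)["Subnets"]:
    print(s["SubnetId"], s["AvailabilityZone"], s["VpcId"])

# If both subnets report the same AvailabilityZone, the auto scaling group
# has nowhere else to place instances, regardless of the AvailabilityZones
# parameter passed to the stack.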
I'm having issues getting my Lambda configured correctly to be able to run batch jobs. The code looks like this:
import os

import boto3

client = boto3.client('batch')

_job_queue = os.environ['JOB_QUEUE']
_job_definition = os.environ['JOB_DEFINITION']
_job_name = os.environ['START_JOB_NAME']


def lambda_handler(event, context):
    return start_job()


def start_job():
    # Skip submission if a job with this name is already in the queue.
    response = client.list_jobs(jobQueue=_job_queue)
    if _job_name in [job['jobName'] for job in response['jobSummaryList']]:
        return 200
    try:
        client.submit_job(jobName=_job_name, jobQueue=_job_queue, jobDefinition=_job_definition)
        return 201
    except Exception:
        return 400
It's failing on client.list_jobs(jobQueue=_job_queue), with the following error:
"errorMessage": "An error occurred (AccessDeniedException) when
calling the ListJobs operation: User:
arn:aws:sts::749340585813:assumed-role/myproject/dev-StartJobLambda-HZO22Z5IMTFB
is not authorized to perform: batch:ListJobs on resource:
arn:aws:batch:us-west-2:749340585813:/v1/listjobs",
If I add my access keys to the lambda above, it works fine. I assume this is because I have administrator access, and authenticating as my user gives the lambda my privileges.
My lambda definition looks like:
"StartJobLambda": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Description": "Starts the My Project model training job.",
"Role": {
"Fn::GetAtt": [
"StartJobRole",
"Arn"
]
},
"Runtime": "python3.6",
"Handler": {
"Fn::Sub": "${StartJobModule}.lambda_handler"
},
"Tags": [
{
"Key": "environment",
"Value": {
"Ref": "Environment"
}
},
{
"Key": "project",
"Value": "myproject"
}
],
"Environment": {
"Variables": {
"JOB_QUEUE": {
"Ref": "JobQueue"
},
"JOB_DEFINITION": {
"Ref": "TrainingJob"
}
}
},
"Code": {
"S3Bucket": {
"Ref": "CodeBucket"
},
"S3Key": {
"Ref": "StartJobKey"
}
},
"VpcConfig": {
"SubnetIds": [
{
"Fn::ImportValue": {
"Fn::Sub": "${NetworkStackNameParameter}-PrivateSubnet"
}
},
{
"Fn::ImportValue": {
"Fn::Sub": "${NetworkStackNameParameter}-PrivateSubnet2"
}
}
],
"SecurityGroupIds": [
{
"Fn::ImportValue": {
"Fn::Sub": "${NetworkStackNameParameter}-TemplateSecurityGroup"
}
}
]
}
}
}
The following role and policy are also created:
"StartJobRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "myproject-start-job",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
},
"StartJobBatchPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "start-job-batch-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"batch:ListJobs",
"batch:SubmitJob"
],
"Resource": [
{
"Ref": "JobQueue"
}
]
}
]
},
"Roles": [
{
"Ref": "StartJobRole"
}
]
}
}
In addition, there is a role to enable the lambda to run on a VPC:
"LambdaVPCExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "myproject-lambda-vpc-execution-role",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
},
"LambdaVPCExecutionPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "lambda-vpc-execution-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface"
],
"Resource": "*"
}
]
},
"Roles": [
{
"Ref": "LambdaVPCExecutionRole"
},
{
"Ref": "StartJobRole"
}
]
}
},
This is something CloudFormation needs to improve on. Some AWS services don't allow resource-level permissions, yet when you try creating such a policy your stack will still succeed. For IAM-related issues, sometimes you need to go into the console and verify your policy is not in a warning state. At a minimum, AWS will flag policies that attempt to apply resource-level permissions on services that don't allow them.
For example, for DynamoDB you must grant access to all tables; you can't confine or restrict access to a single table. If you try creating such an IAM policy with CloudFormation it will not fail, but your desired effect will not be achieved.
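One way to see where the deny comes from without redeploying is the IAM policy simulator. A rough sketch with boto3 (the role ARN is a placeholder for the Lambda's execution role, and the caller needs permission for iam:SimulatePrincipalPolicy):

import boto3

iam = boto3.client("iam")

# Placeholder: the execution role attached to the Lambda.
role_arn = "arn:aws:iam::123456789012:role/myproject-start-job"

resp = iam.simulate_principal_policy(
    PolicySourceArn=role_arn,
    ActionNames=["batch:ListJobs", "batch:SubmitJob"],
)
for result in resp["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])

# If batch:ListJobs evaluates to implicitDeny while the policy scopes its
# Resource to the job queue ARN, widening that statement's Resource to "*"
# is the usual fix for actions that don't accept resource-level permissions.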