Does anyone know how to set up read-only access to specific log groups?
resource "aws_iam_policy" "logs" {
name = "AWSLogs${title(var.product)}"
description = "Logging policy for ${title(var.product)}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:Get*",
"logs:List*",
"logs:Filter*"
],
"Resource": [
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.one.arn}:log-stream:*",
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.two.arn}:log-stream:*",
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.three.arn}:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_group_policy_attachment" "logs" {
group = "${aws_iam_group.logs.name}"
policy_arn = "${aws_iam_policy.logs.arn}"
}
resource "aws_iam_group" "logs" {
name = "${title(var.product)}Logs"
}
I'm currently struggling to restrict access to specific log groups: it only works when I set the resource to "*", not when I limit it to predefined log groups. Does anyone have a best practice or solution? With the configuration above, when I access the logs as a user who is a member of the IAM group "logs", I only get:
Not authorized to perform: logs:FilterLogEvents
aws_cloudwatch_log_group.one.arn is already the full ARN of the one log group, i.e.,
arn:aws:logs:us-east-1:123456789012:log-group:one
So refer only to that in the Resource list:
resource "aws_iam_policy" "logs" {
name = "AWSLogs${title(var.product)}"
description = "Logging policy for ${title(var.product)}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:Get*",
"logs:List*",
"logs:Filter*"
],
"Resource": [
"${aws_cloudwatch_log_group.one.arn}:log-stream:*",
"${aws_cloudwatch_log_group.two.arn}:log-stream:*",
"${aws_cloudwatch_log_group.three.arn}:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": "*"
}
]
}
EOF
}
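With the interpolation resolved, each entry in the first Resource list then comes out in the expected form, e.g.:
"arn:aws:logs:us-east-1:123456789012:log-group:one:log-stream:*"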
I was able to kind of solve this by adding a Condition to the Statement that filters by tags, and making sure that the log groups and streams are tagged correspondingly:
"Condition": {
"StringEquals": {
"aws:ResourceTag/sometagname": "sometagvalue"
}
}
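For context, a minimal sketch of where that Condition sits within a full statement (the tag name and value are placeholders):
{
  "Effect": "Allow",
  "Action": [
    "logs:Get*",
    "logs:List*",
    "logs:Filter*"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/sometagname": "sometagvalue"
    }
  }
}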
Related
I have an ElasticSearch IP-based access policy. I know I can deny based on resources and actions (GET, POST, DELETE, etc.). POST, however, is a special beast: it can be used both to query and to alter data. How do I allow queries to occur and yet prevent alteration of data?
Here is an example IP-based access policy that I am expanding on. Certain applications will need POST to function. Analysts, however, should only be able to query the data, so GET, plus POST for queries, but I don't want them to be able to alter the data in any manner (no DELETE, PUT, or POST that alters data).
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"es:ESHttpGET"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"192.0.2.0/24"
]
}
},
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"es:ESHttpGET"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"192.0.2.0/24"
]
}
},
"Resource": "arn:aws:es:region:aws-account-id:domain/domain-name/test-index/_search"
},
]
}
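For illustration only (a hedged sketch, not a verified solution): Amazon ES also exposes per-method actions such as es:ESHttpPost, es:ESHttpPut, and es:ESHttpDelete, so one possible shape is to deny the mutating methods outright and allow POST only on query paths such as _search:
{
  "Effect": "Deny",
  "Principal": { "AWS": "*" },
  "Action": [
    "es:ESHttpPut",
    "es:ESHttpDelete"
  ],
  "Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
},
{
  "Effect": "Allow",
  "Principal": { "AWS": "*" },
  "Action": [
    "es:ESHttpPost"
  ],
  "Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/test-index/_search"
}
Note that a POST to a bare index path or _bulk can still write data, so any Allow on es:ESHttpPost has to stay narrowly scoped to query endpoints.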
I am using Elasticsearch 5.6 with the X-Pack plugin.
My Kibana user connects to Elasticsearch with the read_only role.
"read_only": {
"cluster": [
"monitor"
],
"indices" : [
{
"names" : [ "my-index-*" ],
"privileges" : ["read", "view_index_metadata"]
},
{
"names" : [ ".kibana*"],
"privileges" : ["read", "view_index_metadata"]
}
]
}
"kibana_system": {
"cluster": [
"monitor",
"cluster:admin/xpack/monitoring/bulk"
],
"indices": [
{
"names": [
".kibana*",
".reporting-*"
],
"privileges": [
"all"
]
},
{
"names": [
".monitoring-*"
],
"privileges": [
"read"
]
}
],
"run_as": [],
"metadata": {
"_reserved": true
},
"transient_metadata": {
"enabled": true
}
}
It only succeeds in connecting if I add the "kibana_system" role to the user in addition to the "read_only" role.
What is the "kibana_system" role for?
How can I grant fewer permissions to my user? Without "kibana_system", I need read-only access to my-index-*.
You just need to add the kibana_user role and the monitoring_user role to your user and you'll be good to go.
No change necessary to the read_only role.
Per the Elastic documentation on built-in roles, the kibana_system role...
...should not be assigned to users as the granted permissions may change between releases.
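A minimal sketch of assigning those roles through the X-Pack security API on 5.6, sent as POST /_xpack/security/user/kibana_viewer (the username kibana_viewer and password are placeholders):
{
  "password": "changeme",
  "roles": [ "read_only", "kibana_user", "monitoring_user" ]
}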
I'm trying to get the ARN of a DynamoDB table created with @model from the API category.
The ARN is an output of the autogenerated CloudFormation template under /amplify/backend/api/{api-name}/build/stacks.
I tried to import the ARN with the following statement in the EventSourceMapping for my Lambda function:
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Join": [
":",
[
{
"Ref": "apiGraphQLAPIIdOutput"
},
"GetAtt",
"CustomerTable",
"StreamArn"
]
]
}
},
But this throws an error when pushing to the cloud:
Output 'GetAttCustomerTableStreamArn' not found in stack 'arn:aws:cloudformation:eu-central-1:124149422162:stack/myapp-stage-20191009174227-api-SHBHD6GIS7SD/5fb78d10-eaac-11e9-8a4c-0ac41be8cd2e'
I also added a dependsOn entry in backend-config.json, which doesn't resolve the problem.
So what would be the correct way to get this stream ARN in the CloudFormation template of a Lambda function?
So, I recently discovered it is indeed possible:
You must add this statement to your policy to allow access to the stream:
{
  "Action": [
    "dynamodb:*"
  ],
  "Effect": "Allow",
  "Resource": [
    {
      "Fn::ImportValue": {
        "Fn::Join": [
          ":",
          [
            { "Ref": "apiGraphQLAPIIdOutput" },
            "GetAtt",
            "CustomerTable",
            "StreamArn"
          ]
        ]
      }
    }
  ]
}
And additionally, add this EventSourceMapping:
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Join": [
":",
[
{
"Ref": "apiGraphQLAPIIdOutput"
},
"GetAtt",
"CustomerTable",
"StreamArn"
]
]
}
}
Amplify exports the stream ARNs in the folder
amplify\backend\api\{api_name}\build\stacks\{table_name}.json
This worked for me in an existing project and also when setting up a new environment.
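For reference, the Fn::Join above just assembles the export name with colon delimiters, so the import is equivalent to the following (the API ID abcd1234 is a made-up placeholder):
"EventSourceArn": {
  "Fn::ImportValue": "abcd1234:GetAtt:CustomerTable:StreamArn"
}
which corresponds to the GetAttCustomerTableStreamArn output exported by the generated API stack.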
I'm generating a CloudFormation template with several AWS Lambda functions. As part of the CloudFormation template I also want to add a subscription filter so that CloudWatch logs will be sent to a different account.
However, since I don't know the names of the log groups in advance and couldn't find any way to reference them, I wasn't able to solve it.
Is there a way to do so?
You can try using a custom resource that invokes your Lambda with a test payload or something similar; that will eventually create the log group, which you can then refer to for the subscription, as praveen mentioned earlier.
You can use a function to get the log group name. For example:
"LogGroupName": {
"Fn::Join": [
"",
[
"/aws/lambda/",
{
"Ref": "MyLambdaFunction"
}
]
]
}
Note that MyLambdaFunction is the logical name of your Lambda function resource in the CloudFormation template.
The way the Serverless Framework does it should work for you: it creates a log group resource whose name matches the one your Lambda function will use, and you can then reference that log group wherever you need it. You will have to give your Lambda function an explicit name rather than relying on the default naming behavior; you can use the stack name to keep it unique.
Something like:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "FunctionLogGroup": {
      "Type": "AWS::Logs::LogGroup",
      "Properties": {
        "LogGroupName": {
          "Fn::Sub": "/aws/lambda/MyFunction-${AWS::StackName}"
        }
      }
    },
    "MyFunctionNameRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "ManagedPolicyArns": ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"],
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [{
            "Action": ["sts:AssumeRole"],
            "Effect": "Allow",
            "Principal": {
              "Service": ["lambda.amazonaws.com"]
            }
          }]
        }
      }
    },
    "MyFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Code": {
          "ZipFile": "def index(event, context):\n    return 'hello world'\n"
        },
        "FunctionName": {
          "Fn::Sub": "MyFunction-${AWS::StackName}"
        },
        "Handler": "index.index",
        "MemorySize": 128,
        "Role": {
          "Fn::GetAtt": ["MyFunctionNameRole", "Arn"]
        },
        "Runtime": "python3.6"
      },
      "DependsOn": ["FunctionLogGroup"]
    },
    "MySubscriptionFilter": {
      "Type": "AWS::Logs::SubscriptionFilter",
      "Properties": {
        "DestinationArn": "TODO TODO",
        "FilterPattern": "",
        "LogGroupName": { "Ref": "FunctionLogGroup" },
        "RoleArn": "TODO TODO"
      }
    }
  }
}
I'm having trouble figuring out what's required for the signature. I see some examples using hex and others using Base64. Which one is it?
Base64.encode64(OpenSSL::HMAC.digest('sha256', getSignatureKey, @policy)).gsub(/\n|\r/, '')
Or:
OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), getSignatureKey, @policy).gsub(/\n|\r/, '')
Okay, so I got it. There are two very important things to consider when creating the signature: A) how the signature is calculated, and B) how your bucket policy is set up. I'm assuming that your CORS configuration allows POST, and that your IAM user/group has S3 access (and really should have only S3 access).
The policy document for the form data requires:
["starts-with", "$key", "{{intended_file_path}}"],
"x-amz-credential",
"x-amz-algorithm",
"x-amz-date",
"bucket"
The ["starts-with", "$key" should be the intended file destination path - ie, "uploads", or "user/jack/", or "images", whatever - see example below.
Here is how I create my signatures, along with my bucket policy.
Bucket Config:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Get",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-development/*"
    },
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/example"
      },
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::example-development/*", "arn:aws:s3:::example-development"]
    }
  ]
}
Backend:
def string_to_sign
  @time = Time.now.utc
  @time_policy = @time.strftime('%Y%m%dT000000Z')
  @date_stamp = @time.strftime('%Y%m%d')
  ret = {
    "expiration" => 10.hours.from_now.utc.iso8601,
    "conditions" => [
      { "bucket" => ENV["aws_bucket"] },
      { "x-amz-credential": "#{ENV["aws_access_key"]}/#{@date_stamp}/us-west-2/s3/aws4_request" },
      { "x-amz-algorithm": "AWS4-HMAC-SHA256" },
      { "acl": "public-read" },
      { "x-amz-date": @time_policy },
      ["starts-with", "$key", "uploads"]
    ]
  }
  @policy = Base64.encode64(ret.to_json).gsub(/\n|\r/, '')
end
def getSignatureKey
  kDate    = OpenSSL::HMAC.digest('sha256', ("AWS4" + ENV["aws_secret_key"]), @date_stamp)
  kRegion  = OpenSSL::HMAC.digest('sha256', kDate, 'us-west-2')
  kService = OpenSSL::HMAC.digest('sha256', kRegion, 's3')
  kSigning = OpenSSL::HMAC.digest('sha256', kService, "aws4_request")
end
def sig
  sig = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), getSignatureKey, @policy).gsub(/\n|\r/, '')
end
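To answer the hex-versus-Base64 question directly: the Signature Version 4 signature is the lowercase hex digest, which is why sig uses hexdigest, while the policy itself is the Base64-encoded JSON. For completeness, a sketch of the form fields the browser then POSTs along with the file; the angle-bracket values are placeholders produced by the methods above:
{
  "key": "uploads/${filename}",
  "acl": "public-read",
  "policy": "<output of string_to_sign>",
  "x-amz-algorithm": "AWS4-HMAC-SHA256",
  "x-amz-credential": "<aws_access_key>/<date_stamp>/us-west-2/s3/aws4_request",
  "x-amz-date": "<time_policy>",
  "x-amz-signature": "<output of sig>",
  "file": "<the file itself, sent as the last form field>"
}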