I'm having trouble figuring out what is required for the signature. Some examples I see use hex, and others use base64. Which one is it?
Base64.encode64(OpenSSL::HMAC.digest('sha256', getSignatureKey, @policy)).gsub(/\n|\r/, '')
Or:
OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), getSignatureKey, @policy).gsub(/\n|\r/, '')
Okay, so I got it. There are two very important things to consider when creating the signature: A) how the signature is calculated, and B) how your bucket policy is set up. (To answer the original question: with Signature Version 4 the signature is the hex digest - see the sig method below.) I'm assuming that your CORS configuration allows POST, and that your IAM user/group has S3 access - and really should only have S3 access.
The bucket policy for the form data requires:
["starts-with", "$key", "{{intended_file_path}}"],
"x-amz-credential",
"x-amz-algorithm",
"x-amz-date",
"bucket"
The ["starts-with", "$key" should be the intended file destination path - ie, "uploads", or "user/jack/", or "images", whatever - see example below.
Here is how I generate the signature, along with my bucket policy.
Bucket Config:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow Get",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-development/*"
    },
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/example"
      },
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::example-development/*", "arn:aws:s3:::example-development"]
    }
  ]
}
Backend:
def string_to_sign
  @time = Time.now.utc
  @time_policy = @time.strftime('%Y%m%dT000000Z')
  @date_stamp = @time.strftime('%Y%m%d')
  ret = {
    "expiration" => 10.hours.from_now.utc.iso8601,
    "conditions" => [
      {"bucket" => ENV["aws_bucket"]},
      {"x-amz-credential": "#{ENV["aws_access_key"]}/#{@date_stamp}/us-west-2/s3/aws4_request"},
      {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
      {"acl": "public-read"},
      {"x-amz-date": @time_policy},
      ["starts-with", "$key", "uploads"],
    ]
  }
  @policy = Base64.encode64(ret.to_json).gsub(/\n|\r/, '')
end
def getSignatureKey
  # AWS Signature Version 4 key derivation: HMAC the date, region,
  # service, and request type in sequence, starting from the secret key.
  kDate    = OpenSSL::HMAC.digest('sha256', ("AWS4" + ENV["aws_secret_key"]), @date_stamp)
  kRegion  = OpenSSL::HMAC.digest('sha256', kDate, 'us-west-2')
  kService = OpenSSL::HMAC.digest('sha256', kRegion, 's3')
  kSigning = OpenSSL::HMAC.digest('sha256', kService, "aws4_request")
end
def sig
  sig = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), getSignatureKey, @policy).gsub(/\n|\r/, '')
end
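For completeness, here is a hypothetical sketch of how these pieces could feed the browser's POST form fields (the helper name and key prefix are illustrative; they must match the policy conditions above, and string_to_sign must run first so @date_stamp and @time_policy are set):

def upload_form_fields
  policy = string_to_sign  # sets @date_stamp and @time_policy as side effects
  {
    "key"              => "uploads/${filename}",
    "acl"              => "public-read",
    "policy"           => policy,
    "x-amz-algorithm"  => "AWS4-HMAC-SHA256",
    "x-amz-credential" => "#{ENV["aws_access_key"]}/#{@date_stamp}/us-west-2/s3/aws4_request",
    "x-amz-date"       => @time_policy,
    "x-amz-signature"  => sig  # hex digest, as discussed above
  }
end

Each of these field names has to correspond to a condition in the policy, or S3 rejects the POST.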
Related
I have an Elasticsearch IP-based access policy. I know I can deny based on resources and actions (GET, POST, DELETE, etc.). POST, however, is a specific beast and can be used to both query and alter data. How do I allow queries to occur and yet prevent alteration of data?
Here is an example IP-based access policy that I am expanding on. Certain applications will need POST to function. Analysts, however, should only be able to query the data - so GET, plus POST for queries - but I don't want them to be able to alter the data in any manner (no DELETE, PUT, or POST that would alter data).
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"es:ESHttpGET"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"192.0.2.0/24"
]
}
},
"Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"es:ESHttpGET"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"192.0.2.0/24"
]
}
},
"Resource": "arn:aws:es:region:aws-account-id:domain/domain-name/test-index/_search"
},
]
}
I am trying to create a local variable "secret_list" in Terraform based on a variable "secrets" that is defined in a tfvars.json file.
The "secrets" variable looks like this:
{
  "secrets": {
    "datacore": {
      "secrets": [
        {"secret_scope": "datacore", "secret_key": "serviceaccount-databricks-deploy", "secret_value": "dummy"},
        {"secret_scope": "datacore", "secret_key": "serviceaccount-datacore", "secret_value": "dummy"}
      ],
      "acls": [
        {"secret_scope": "datacore", "principal": "admins", "permission": "MANAGE"},
        {"secret_scope": "datacore", "principal": "Datacore Power Users", "permission": "READ"}
      ]
    },
    "ai": {
      "secrets": [
        {"secret_scope": "ai", "secret_key": "serviceaccount-ai", "secret_value": "dummy"},
        {"secret_scope": "ai", "secret_key": "serviceaccount-ai-private-key", "secret_value": "dummy"}
      ],
      "acls": [
        {"secret_scope": "ai", "principal": "admins", "permission": "MANAGE"},
        {"secret_scope": "ai", "principal": "AI Power Users", "permission": "READ"}
      ]
    }
  }
}
Its structure is described as:
variable "secrets" {
type = map(object({
secrets = list(
object({
secret_scope = string,
secret_key = string,
secret_value =string
})),
acls = list(
object({
secret_scope = string,
principal = string,
permission = string
}))}))
}
I want to create a new local variable "secret_list" which outputs this:
secret_list = [
{"secret_scope": "datacore", "secret_key": "serviceaccount-databricks-deploy", "secret_value": "dummy"},
{"secret_scope": "datacore", "secret_key": "serviceaccount-databricks-deploy", "secret_value": "dummy"},
{"secret_scope": "ai", "secret_key": "serviceaccount-ai", "secret_value": "dummy"},
{"secret_scope": "ai", "secret_key": "serviceaccount-ai-private-key", "secret_value": "dummy"}
]
This is a list of objects that contains all the secrets that are inside the "secrets" variable.
I have tried to create a local variable "secret_list" using a for loop like this:
locals {
  secret_list = {
    value = flatten([
      for secrets in var.secrets : [
        for secret_attributes in secrets.secrets : secret_attributes
      ]
    ])
  }
}
and created a new output object to view the result in the console:
output "secret_list" {
value = local.secret_list
}
I cannot seem to get the desired output. In the console it looks like:
secret_list = {
+ value = [
+ {
+ secret_key = "serviceaccount-databricks-deploy"
+ secret_scope = "datacore"
+ secret_value = "dummy"
},
+ {
+ secret_key = "serviceaccount-datacore"
+ secret_scope = "datacore"
+ secret_value = "dummy"
},
+ {
+ secret_key = "serviceaccount-ai"
+ secret_scope = "ai"
+ secret_value = "dummy"
},
+ {
+ secret_key = "serviceaccount-ai-private-key"
+ secret_scope = "ai"
+ secret_value = "dummy"
}
]
}
How can I get to:
secret_list = [
{"secret_scope": "datacore", "secret_key": "serviceaccount-databricks-deploy", "secret_value": "dummy"},
{"secret_scope": "datacore", "secret_key": "serviceaccount-databricks-deploy", "secret_value": "dummy"},
{"secret_scope": "ai", "secret_key": "serviceaccount-ai", "secret_value": "dummy"},
{"secret_scope": "ai", "secret_key": "serviceaccount-ai-private-key", "secret_value": "dummy"}
]
To remove the delta between the observed and desired structures, your locals block with the for expressions:
locals {
  secret_list = {
    value = flatten([
      for secrets in var.secrets : [
        for secret_attributes in secrets.secrets : secret_attributes
      ]
    ])
  }
}
should not wrap the result in an outer map with a value key. The entire flattened expression should be assigned to secret_list directly:
locals {
  secret_list = flatten([
    for secrets in var.secrets : [
      for secret_attributes in secrets.secrets : secret_attributes
    ]
  ])
}
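As a side note (not required for the fix), the inner for expression is redundant here, since each map value's secrets attribute is already a list; an equivalent, slightly more compact sketch:

locals {
  # values() drops the map keys; flatten() merges the per-scope lists
  secret_list = flatten([for s in values(var.secrets) : s.secrets])
}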
I'm trying to get the ARN of a DynamoDB table created with @model from the api category.
The ARN is an output of the autogenerated CloudFormation template under /amplify/backend/api/{api-name}/build/stacks.
I tried to import the ARN with the following statement in the EventSourceMapping for my Lambda function:
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Join": [
":",
[
{
"Ref": "apiGraphQLAPIIdOutput"
},
"GetAtt",
"CustomerTable",
"StreamArn"
]
]
}
},
But this throws an error when pushing to the cloud:
Output 'GetAttCustomerTableStreamArn' not found in stack 'arn:aws:cloudformation:eu-central-1:124149422162:stack/myapp-stage-20191009174227-api-SHBHD6GIS7SD/5fb78d10-eaac-11e9-8a4c-0ac41be8cd2e'
I also added a dependsOn in the backend-config.json, which doesn't resolve the problem.
So, what would be the correct way to get this stream ARN in a cloudformation template of a lambda function?
So, I recently discovered it is indeed possible.
You must add this statement to your policy to allow access to the stream:
{
  "Action": [
    "dynamodb:*"
  ],
  "Effect": "Allow",
  "Resource": [
    {
      "Fn::ImportValue": {
        "Fn::Join": [
          ":",
          [
            {
              "Ref": "apiGraphQLAPIIdOutput"
            },
            "GetAtt",
            "CustomerTable",
            "StreamArn"
          ]
        ]
      }
    }
  ]
}
And additionally, add this EventSourceMapping:
"EventSourceArn": {
"Fn::ImportValue": {
"Fn::Join": [
":",
[
{
"Ref": "apiGraphQLAPIIdOutput"
},
"GetAtt",
"CustomerTable",
"StreamArn"
]
]
}
}
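For orientation, a minimal sketch of how that property might sit inside a complete AWS::Lambda::EventSourceMapping resource (the resource name and the function reference are hypothetical placeholders):

"CustomerTableEventSourceMapping": {
  "Type": "AWS::Lambda::EventSourceMapping",
  "Properties": {
    "BatchSize": 100,
    "StartingPosition": "LATEST",
    "FunctionName": { "Ref": "LambdaFunction" },
    "EventSourceArn": {
      "Fn::ImportValue": {
        "Fn::Join": [
          ":",
          [
            { "Ref": "apiGraphQLAPIIdOutput" },
            "GetAtt",
            "CustomerTable",
            "StreamArn"
          ]
        ]
      }
    }
  }
}

StartingPosition is required for stream sources; LATEST is a common choice.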
Amplify is exporting the Stream ARNs in the folder
amplify\backend\api\{api_name}\build\stacks\{table_name}.json
This worked for me in an existing project and also when setting up a new env.
Does someone know how to set up read-only access to specific log groups?
resource "aws_iam_policy" "logs" {
name = "AWSLogs${title(var.product)}"
description = "Logging policy for ${title(var.product)}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:Get*",
"logs:List*",
"logs:Filter*"
],
"Resource": [
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.one.arn}:log-stream:*",
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.two.arn}:log-stream:*",
"arn:aws:logs:::log-group:${aws_cloudwatch_log_group.three.arn}:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_group_policy_attachment" "logs" {
group = "${aws_iam_group.logs.name}"
policy_arn = "${aws_iam_policy.logs.arn}"
}
resource "aws_iam_group" "logs" {
name = "${title(var.product)}Logs"
}
I'm currently struggling to set up access restricted to specific log groups; I can only get access when I set the resource to "*", which is not acceptable when the setup is meant to be dedicated to predefined log groups. Does someone have a good practice or solution? When I try the solution above I only get
Not authorized to perform: logs:FilterLogEvents
when I try to access it as a user who is part of the IAM group "logs".
aws_cloudwatch_log_group.one.arn is already the full ARN of the one log group, i.e.,
arn:aws:logs:us-east-1:123456789012:log-group:one
So refer only to that in the Resource list:
resource "aws_iam_policy" "logs" {
name = "AWSLogs${title(var.product)}"
description = "Logging policy for ${title(var.product)}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:Get*",
"logs:List*",
"logs:Filter*"
],
"Resource": [
"${aws_cloudwatch_log_group.one.arn}:log-stream:*",
"${aws_cloudwatch_log_group.two.arn}:log-stream:*",
"${aws_cloudwatch_log_group.three.arn}:log-stream:*"
]
},
{
"Effect": "Allow",
"Action": [
"logs:DescribeLogGroups"
],
"Resource": "*"
}
]
}
EOF
}
I was able to more or less solve this by adding a Condition to the statement that filters by tags, and by making sure that the log groups and streams are tagged correspondingly:
"Condition": {
"StringEquals": {
"aws:ResourceTag/sometagname": "sometagvalue"
}
}
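For context, a minimal sketch of where that Condition block might sit, assuming the tag name and value above (the surrounding statement mirrors the read-only actions from the question):

{
  "Effect": "Allow",
  "Action": [
    "logs:Get*",
    "logs:List*",
    "logs:Filter*"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/sometagname": "sometagvalue"
    }
  }
}

The broad Resource is then effectively narrowed to whichever log groups carry the matching tag.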
I want to create a Slack bot with Node-RED and the Watson Conversation service. This is my flow:
[
{
"id":"92984fcb.13597",
"type":"http in",
"z":"f83b7887.f92208",
"name":"watson slack",
"url":"/watson-rg-1",
"method":"post",
"swaggerDoc":"",
"x":96,
"y":116.35000610351562,
"wires":[
[
"119d86bd.3ce22d"
]
]
},
{
"id":"19d684b0.ceb487",
"type":"http request",
"z":"f83b7887.f92208",
"name":"slack response",
"method":"POST",
"ret":"txt",
"url":"https://hooks.slack.com/services/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",
"tls":"",
"x":782,
"y":126.58999633789062,
"wires":[
[
]
]
},
{
"id":"119d86bd.3ce22d",
"type":"switch",
"z":"f83b7887.f92208",
"name":"Command parser",
"property":"payload.text",
"propertyType":"msg",
"rules":[
{
"t":"regex",
"v":"^!coin",
"vt":"str",
"case":false
},
{
"t":"else"
}
],
"checkall":"true",
"outputs":2,
"x":148,
"y":203.27999877929687,
"wires":[
[
"c1ef9e78.1efb1"
],
[
"9d1c215b.7e936",
"1fd76cf2.10950f"
]
]
},
{
"id":"c1ef9e78.1efb1",
"type":"function",
"z":"f83b7887.f92208",
"name":"real payload filter",
"func":"return {\n payload: Math.random() >= 0.5 ? \"heads\" : \"Tails\"\n};",
"outputs":1,
"noerr":0,
"x":416,
"y":282.2799987792969,
"wires":[
[
"7b8c968d.58d104"
]
]
},
{
"id":"7b8c968d.58d104",
"type":"function",
"z":"f83b7887.f92208",
"name":"watson slack message",
"func":"var text = {\n text: msg.payload,\n username : \"watson\"\n};\nreturn {\n payload : JSON.stringify(text)\n};\n",
"outputs":1,
"noerr":0,
"x":644,
"y":198.27999877929687,
"wires":[
[
"19d684b0.ceb487"
]
]
},
{
"id":"9d1c215b.7e936",
"type":"function",
"z":"f83b7887.f92208",
"name":"Get user context",
"func":"msg.payload = msg.payload.text;\nmsg.user = \"toto\";\n//msg.params.context = {};\nreturn msg;",
"outputs":1,
"noerr":0,
"x":332,
"y":448,
"wires":[
[
"f9fc1260.dd5a",
"446d3ae.e966804"
]
]
},
{
"id":"f9fc1260.dd5a",
"type":"watson-conversation-v1",
"z":"f83b7887.f92208",
"name":"",
"workspaceid":"",
"multiuser":false,
"context":true,
"x":512,
"y":448,
"wires":[
[
"d7f5e507.82bf18",
"c5685f38.2a269"
]
]
},
{
"id":"d7f5e507.82bf18",
"type":"function",
"z":"f83b7887.f92208",
"name":"Handle response",
"func":"var user = msg.user;\nvar convContext = flow.get('convContexts')||{};\n\nconvContext[user] = msg.payload.context;\n\nmsg.payload = msg.payload.output.text.join(\"\\n\");\n\nflow.set('convContexts',convContext);\n\nreturn msg;",
"outputs":"1",
"noerr":0,
"x":712,
"y":448,
"wires":[
[
"7b8c968d.58d104"
]
]
},
{
"id":"446d3ae.e966804",
"type":"debug",
"z":"f83b7887.f92208",
"name":"getUserCtx",
"active":true,
"console":"false",
"complete":"payload",
"x":505,
"y":356.5299987792969,
"wires":[
]
},
{
"id":"c5685f38.2a269",
"type":"debug",
"z":"f83b7887.f92208",
"name":"AfterConv",
"active":true,
"console":"false",
"complete":"payload",
"x":677,
"y":508.52996826171875,
"wires":[
]
},
{
"id":"1fd76cf2.10950f",
"type":"debug",
"z":"f83b7887.f92208",
"name":"slack payload",
"active":true,
"console":"false",
"complete":"payload",
"x":135,
"y":388.5299987792969,
"wires":[
]
}
]
But when I test it, the first branch works (I built it just to check whether the link between Slack and Node-RED works), but the other one (with the Conversation node) doesn't.
I have two errors:
call to watson conversation service failed
Error:not authorized
If anyone is still facing the issue: please note, it might be because the username and the password in the Conversation node have to be set to the credentials of the Conversation service, and not the Bluemix credentials.