I've been stuck on this issue for days and have tried every method I could find online. I am trying to decrypt (using the AWS Go SDK) AWS Lambda environment variables that were encrypted with KMS. I encrypted the env variables both in transit and at rest using the same KMS key.
(Screenshot of the environment variable encryption settings)
I have also attached a policy to the Lambda function so that it has permission to decrypt the variables.
The attached policy looks like this:
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "kms:*",
"Resource": "arn:aws:kms:us-west-2:602487775829:key/f91a9f1a-9b53-4544-xxxxxxxx",
"Condition": {
"StringEquals": {
"kms:EncryptionContext:LambdaFunctionName": "eLead"
}
}
}
}
And here is my Go code snippet for decrypting the variables (KeyId is not required for decryption; I added it just to make sure the same KMS key is used for both encryption and decryption):
import (
	"context"
	"encoding/base64"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

var API_key = os.Getenv("API_key")
var API_Secret = os.Getenv("API_Secret")

func Handler(ctx context.Context) error {
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-west-2"))
	if err != nil {
		return err
	}
	kmsClient := kms.NewFromConfig(cfg)

	API_key_blob, err := base64.StdEncoding.DecodeString(API_key)
	if err != nil {
		return err
	}
	API_Secret_blob, err := base64.StdEncoding.DecodeString(API_Secret)
	if err != nil {
		return err
	}

	keyID := "arn:aws:kms:us-west-2:602487775829:key/f91a9f1a-9b53-4544-a05b-xxxxxx"

	API_key_result, err := kmsClient.Decrypt(ctx, &kms.DecryptInput{
		CiphertextBlob: API_key_blob,
		KeyId:          aws.String(keyID),
	})
	if err != nil {
		fmt.Println("Got error decrypting data: ", err)
		return err
	}
	API_Secret_result, err := kmsClient.Decrypt(ctx, &kms.DecryptInput{
		CiphertextBlob: API_Secret_blob,
		KeyId:          aws.String(keyID),
	})
	if err != nil {
		fmt.Println("Got error decrypting data: ", err)
		return err
	}

	_ = API_key_result    // decrypted API key (used elsewhere)
	_ = API_Secret_result // decrypted API secret (used elsewhere)
	return nil
}

func main() {
	lambda.Start(Handler)
}
The error message is:
Got error decrypting data: operation error KMS: Decrypt, https response error StatusCode: 400, RequestID: 629a64b2-ce53-46ad-88fb-1f6ac684919e, InvalidCiphertextException:
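The only other thing I can think of is the encryption context: since the key policy above conditions on kms:EncryptionContext:LambdaFunctionName, my understanding is that a Decrypt call would have to carry a matching context, roughly like the sketch below (reusing kmsClient, ctx, and API_key_blob from my snippet; I haven't confirmed this is what Lambda actually expects for my ciphertext):

// Sketch only: pass the encryption context the key policy conditions on.
// The "LambdaFunctionName": "eLead" pair is an assumption taken from the policy above.
out, err := kmsClient.Decrypt(ctx, &kms.DecryptInput{
	CiphertextBlob: API_key_blob,
	EncryptionContext: map[string]string{
		"LambdaFunctionName": "eLead",
	},
})
if err != nil {
	return err
}
_ = out.Plaintext // decrypted bytes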
I don't understand what else could cause this error.
Any suggestions would help! Thanks!
In our Staging environment, we have credential-less access to our private S3 buckets; access is granted to individual Docker containers. I am trying to upload a file with PutObject using the aws-sdk-go-v2 library, but I keep getting a 403 AccessDenied API error.
My upload code looks like this:
var uploadFileFunc = func(s3Details S3Details, key string, payload []byte, params MetadataParams) (*s3.PutObjectOutput, error) {
	client := getS3Client(s3Details)

	return client.PutObject(context.TODO(), &s3.PutObjectInput{
		Bucket:      aws.String(s3Details.Bucket),
		Key:         aws.String(key),
		Body:        bytes.NewReader(payload),
		ContentType: aws.String("text/xml"),
	})
}

func getS3Client(s3Details S3Details) *s3.Client {
	endpointResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
		if s3Details.EndpointUrl != "" {
			return aws.Endpoint{
				PartitionID:   "aws",
				URL:           s3Details.EndpointUrl,
				SigningRegion: s3Details.Region,
				SigningMethod: s3Details.SignatureVersion,
			}, nil
		}
		return aws.Endpoint{}, &aws.EndpointNotFoundError{}
	})

	cfg, _ := config.LoadDefaultConfig(context.TODO(),
		config.WithEndpointDiscovery(aws.EndpointDiscoveryEnabled),
		config.WithEndpointResolverWithOptions(endpointResolver))

	return s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.Region = s3Details.Region
		o.Credentials = aws.AnonymousCredentials{}
		o.UsePathStyle = true
	})
}
I am using aws.AnonymousCredentials{} (since our access is credential-less), but anonymous credentials are only used for unsigned requests. I cannot use NewStaticCredentialsProvider with empty values for AccessKeyID and/or SecretAccessKey, as that throws a StaticCredentialsEmptyError during Retrieve(). Adding dummy credentials throws an error that they are not on record. I am assuming this is the cause of my 403 AccessDenied.
How do I sign requests without providing credentials in the Go SDK? Is it even possible? In the boto3 Python library this works fine.
First of all, I strongly suggest you use v2 of the AWS SDK for Go. I'll present here how I do this so far.
First, I get the AWS config to use with this code (only relevant parts are shown):
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
	Log.Fatal(err)
}
Here the package used is github.com/aws/aws-sdk-go-v2/config.
Then, I instantiate an s3Client to use for contacting the AWS S3 service:
s3Client := s3.NewFromConfig(cfg)
The package used here is github.com/aws/aws-sdk-go-v2/service/s3. Finally, to upload your object, you have to run this code:
input := &s3.PutObjectInput{
	Key:    aws.String("test"),
	Bucket: aws.String("test"),
	Body:   bytes.NewReader([]byte("test")),
	ACL:    types.ObjectCannedACLPrivate,
}

if _, err := s3Client.PutObject(context.TODO(), input); err != nil {
	return "", fmt.Errorf("fn UploadFile %w", err)
}
The new package used here is github.com/aws/aws-sdk-go-v2/service/s3/types.
This code is a simplification, but you should be able to achieve what you need. Furthermore, it should take very little time to update the SDK version, and you can rely on both versions simultaneously if you have to work with a huge codebase.
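Since v1 and v2 are separate Go modules with different import paths, they can be imported side by side while you migrate. A minimal sketch of what I mean (the aliases and the eu-west-1 region are just examples, not taken from your code):

// Sketch: aws-sdk-go (v1) and aws-sdk-go-v2 can coexist in one codebase
// because they are distinct modules with different import paths.
import (
	"context"

	awsv1 "github.com/aws/aws-sdk-go/aws"
	sessionv1 "github.com/aws/aws-sdk-go/aws/session"
	s3v1 "github.com/aws/aws-sdk-go/service/s3"

	configv2 "github.com/aws/aws-sdk-go-v2/config"
	s3v2 "github.com/aws/aws-sdk-go-v2/service/s3"
)

func newClients(ctx context.Context) (*s3v1.S3, *s3v2.Client, error) {
	// v1 client for code that has not been migrated yet.
	sess := sessionv1.Must(sessionv1.NewSession(awsv1.NewConfig().WithRegion("eu-west-1")))
	oldClient := s3v1.New(sess)

	// v2 client for new code.
	cfg, err := configv2.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, nil, err
	}
	newClient := s3v2.NewFromConfig(cfg)

	return oldClient, newClient, nil
}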
Let me know if this helps!
Edit
I updated my solution to use the aws.AnonymousCredentials{} option, and with it I was able to successfully upload a file into an S3 bucket. Below you can find the entire solution:
package main
import (
	"bytes"
	"context"
	"crypto/tls"
	"net/http"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func GetAwsConfig() (*aws.Config, error) {
	cfg, err := config.LoadDefaultConfig(context.TODO(),
		// config.WithClientLogMode(aws.LogRequestWithBody|aws.LogResponseWithBody),
		config.WithRegion("eu-west-1"),
		config.WithHTTPClient(&http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}),
		config.WithEndpointResolverWithOptions(
			aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
				return aws.Endpoint{
					PartitionID:       "aws",
					URL:               "http://127.0.0.1:4566",
					SigningRegion:     "eu-west-1",
					HostnameImmutable: true,
				}, nil
			}),
		))
	if err != nil {
		return nil, err
	}

	return &cfg, err
}

func main() {
	cfg, _ := GetAwsConfig()

	s3Client := s3.NewFromConfig(*cfg, func(o *s3.Options) {
		o.Credentials = aws.AnonymousCredentials{}
	})

	if _, err := s3Client.PutObject(context.Background(), &s3.PutObjectInput{
		Bucket: aws.String("mybucket"),
		Key:    aws.String("myfile"),
		Body:   bytes.NewReader([]byte("hello")),
		ACL:    types.ObjectCannedACLPrivate,
	}); err != nil {
		panic(err)
	}
}
Before running the code, you have to create the bucket. I used the command below:
aws --endpoint-url=http://localhost:4566 s3 mb s3://mybucket
With this in place you can upload the file into the mybucket S3 bucket. To check that the file exists, you can issue this command:
aws --endpoint-url=http://localhost:4566 s3 ls s3://mybucket --recursive --human-readable
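If you'd rather verify from Go instead of the CLI, a small sketch that reuses the s3Client, bucket, and key from the solution above (plus an "fmt" import) can ask S3 for the object's metadata:

// Sketch: confirm the object exists by requesting its metadata with HeadObject.
// Reuses s3Client, "mybucket", and "myfile" from the solution above.
head, err := s3Client.HeadObject(context.Background(), &s3.HeadObjectInput{
	Bucket: aws.String("mybucket"),
	Key:    aws.String("myfile"),
})
if err != nil {
	panic(err)
}
fmt.Println("object exists, ETag:", aws.ToString(head.ETag))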
Hope this helps in solving your issue!
I am trying to run a very simple operation, just listing the clusters in a project, using the Google Cloud SDK for Go.
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"os"

	"google.golang.org/api/container/v1"
	_ "k8s.io/client-go/plugin/pkg/client/auth/gcp" // register GCP auth provider
)

var fProjectId = flag.String("projectId", "", "specify a project id to examine")

func main() {
	flag.Parse()
	if *fProjectId == "" {
		log.Fatal("must specify -projectId")
	}

	ctx := context.TODO()

	svc, err := container.NewService(ctx)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// Ask Google for a list of all kube clusters in the given project.
	_, err = svc.Projects.Zones.Clusters.List(*fProjectId, "-").Context(ctx).Do()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}
The code fails as follows:
▶ go run main.go --projectId my-project-id
Get "https://container.googleapis.com/v1/projects/my-project-id/zones/-/clusters?alt=json&prettyPrint=false": oauth2: cannot fetch token: 400 Bad Request
Response: {
"error": "invalid_grant",
"error_description": "reauth related error (rapt_required)",
"error_uri": "https://support.google.com/a/answer/9368756",
"error_subtype": "rapt_required"
}
exit status 1
However, the command
gcloud container clusters list
succeeds.
What might be the issue here?
The answer in the link is not very informative.
EDIT: The problem was solved once I ran
gcloud auth application-default login
Why is this needed?
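For comparison, my understanding is that the client can also be handed credentials explicitly instead of relying on application-default credentials, along these lines (a sketch only; the key path is a placeholder and I haven't verified it behaves identically):

// Sketch: give the container service an explicit service-account key file
// via google.golang.org/api/option instead of application-default credentials.
// The path below is a placeholder.
svc, err := container.NewService(ctx, option.WithCredentialsFile("/path/to/service-account.json"))
if err != nil {
	log.Fatal(err)
}
_ = svc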
I have a publicly exposed Lambda that allows users to sign up for my site, backed by AWS Cognito; or at least that's what I'm trying to build.
func (d *deps) handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	req := Request{}
	err := json.Unmarshal([]byte(request.Body), &req)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: 400}, UserErrResponse
	}

	session := session2.Must(session2.NewSessionWithOptions(session2.Options{Profile: "default"}))
	svc := cognitoidentityprovider.New(session, aws.NewConfig().WithRegion("us-east-1"))

	input := &cognitoidentityprovider.SignUpInput{
		Password: aws.String(req.Password),
		Username: aws.String(req.Email),
		ClientId: aws.String("**myclientid**"),
	}

	_, err = svc.SignUp(input)
	if err != nil {
		log.Printf("unable to sign up user with err=%v", err.Error())
		return events.APIGatewayProxyResponse{StatusCode: 500}, UserErrResponse
	}

	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}
This Lambda hangs and eventually times out on the svc.SignUp(input) line. No error, no feedback, nothing.
My initial thought was that the Lambda does not have permission to call Cognito, so as a first debugging measure I gave it full permissions:
resource "aws_iam_policy" "cognito_sign_in" {
name = "${local.name_prefix}LambdaCogntioPolicy"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"cognito-identity:*",
"cognito-idp:*",
"cognito-sync:*",
]
Effect = "Allow",
Resource = "*"
},
]
})
}
This did not help. It still hangs. What am I missing here?
The goal is to write my own Lambdas for signing users up, signing users in, and resetting passwords, all backed by Cognito.
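To at least surface an error instead of a silent hang, one thing I'm considering is the context-aware call with a deadline; a rough sketch (it assumes the handler signature is changed to also accept ctx context.Context from the Lambda runtime, and it needs "context" and "time" imports):

// Sketch: use SignUpWithContext with a deadline so a network hang turns into
// a visible error instead of the Lambda silently timing out.
ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()

_, err = svc.SignUpWithContext(ctx, input)
if err != nil {
	log.Printf("sign up failed: %v", err)
	return events.APIGatewayProxyResponse{StatusCode: 500}, UserErrResponse
}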
import (
	"context"
	"fmt"
	"log"
	"os"

	dataflow "cloud.google.com/go/dataflow/apiv1beta3"
	"cloud.google.com/go/functions/metadata"
	dataflowpb "google.golang.org/genproto/googleapis/dataflow/v1beta3"
)

...

func KickDataflow(ctx context.Context, start, end string) error {
	client, err := dataflow.NewTemplatesClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	// gcsPath and tempLocation are defined in the code omitted above.
	req := &dataflowpb.CreateJobFromTemplateRequest{
		ProjectId: "xxx",
		JobName:   fmt.Sprintf("transform-orders_%s", start),
		Template: &dataflowpb.CreateJobFromTemplateRequest_GcsPath{
			GcsPath: gcsPath,
		},
		Parameters: map[string]string{
			"input": fmt.Sprintf("gs://xxx/order_updates/dt=%s/orders_%s_%s.jsonl", start, start, end),
		},
		Environment: &dataflowpb.RuntimeEnvironment{
			NumWorkers:          int32(1),
			MaxWorkers:          int32(3),
			WorkerZone:          "asia-northeast1-b",
			TempLocation:        tempLocation,
			ServiceAccountEmail: os.Getenv("SERVICE_ACCOUNT"),
		},
		Location: "asia-northeast1",
	}

	_, err = client.CreateJobFromTemplate(ctx, req)
	if err != nil {
		return err
	}

	return nil
}
It returns the following error:
error in kicking dataflow job: rpc error: code = FailedPrecondition desc = (9abc46254fa5372d): The workflow could not be created, since it was sent to an invalid regional endpoint (asia-northeast1). Please resubmit to a valid Cloud Dataflow regional endpoint. The list of Cloud Dataflow regional endpoints is at https://cloud.google.com/dataflow/docs/concepts/regional-endpoints.
There seems to be the same issue in the Node.js client:
Google Dataflow - Invalid Regional Endpoint - Impossible set region on template from nodejs client
How can I resolve this issue using the Go client?
I'm trying to create an index via an indexing job written in Go. I have full access to the ES cluster on AWS and am using my access key and secret key.
I can easily create indices using Kibana, but when I try to use the Go client, it does not work and returns a 403 Forbidden error.
AWS Elasticsearch Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:<region>:111111111111:domain/prod-elasticsearch/*"
        }
    ]
}
indexing.go
package main

import (
	"flag"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/olivere/elastic/v7"
	"github.com/spf13/viper"

	aws "github.com/olivere/elastic/v7/aws/v4"
)

func main() {
	var (
		accessKey = viper.GetString("aws.access_key")
		secretKey = viper.GetString("aws.secret_key")
		url       = viper.GetString("elasticsearch.host")
		sniff     = flag.Bool("sniff", false, "Enable or disable sniffing")
		region    = flag.String("region", "ap-southeast-1", "AWS Region name")
	)

	if url == "" {
		log.Fatal("please specify a URL with -url")
	}
	if accessKey == "" {
		log.Fatal("missing -access-key or AWS_ACCESS_KEY environment variable")
	}
	if secretKey == "" {
		log.Fatal("missing -secret-key or AWS_SECRET_KEY environment variable")
	}
	if *region == "" {
		log.Fatal("please specify an AWS region with -region")
	}

	creds := credentials.NewStaticCredentials(accessKey, secretKey, "")
	_, err := creds.Get()
	if err != nil {
		log.Fatal("Wrong credentials: ", err)
	}

	signingClient := aws.NewV4SigningClient(creds, *region)

	// Create an Elasticsearch client
	client, err := elastic.NewClient(
		elastic.SetURL(url),
		elastic.SetSniff(*sniff),
		elastic.SetHealthcheck(*sniff),
		elastic.SetHttpClient(signingClient),
	)
	if err != nil {
		log.Fatal(err)
	}

	// This part gives 403 forbidden error
	indices, err := client.IndexNames()
	if err != nil {
		log.Fatal(err)
	}

	// Just a status message
	fmt.Println("Connection succeeded, found indices:", indices)
}
config.toml
[elasticsearch]
host = "https://vpc-prod-elasticsearch-111111.ap-southeast-1.es.amazonaws.com"
[aws]
access_key = <ACCESS_KEY>
secret_key = <SECRET_KEY>
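(Not shown above: viper only returns these values after the config file has been read somewhere first. A minimal sketch of the kind of initialization assumed here, simplified and not copied from my actual project:)

// Sketch: load config.toml so that the viper.GetString calls in main return values.
// The config name and search path are assumptions; the real project may differ.
viper.SetConfigName("config")
viper.SetConfigType("toml")
viper.AddConfigPath(".")
if err := viper.ReadInConfig(); err != nil {
	log.Fatal(err)
}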
The Go code looks good. The 403 Forbidden error was coming up due to some issues with the AWS secret key.
To debug such an issue:
1. Check whether the VPC subnet in which the AWS Elasticsearch cluster operates has been removed. If the VPC subnet has been removed, it will always throw a 403 error.
2. If the subnet configuration is fine, then it is most probably an issue with the AWS secrets. You can create a new AWS application user for AWS Elasticsearch and retry (a quick way to sanity-check the keys from Go is sketched below). This should work fine.
I solved my issue by following these two points.
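As a quick way to sanity-check the keys themselves (point 2 above), you can ask STS who they belong to using the same static credentials. A small sketch with the v1 SDK the question already depends on; note that aws here means github.com/aws/aws-sdk-go/aws rather than the olivere signing helper, and the extra imports are the v1 aws, session, and sts packages:

// Sketch: verify the access key / secret key are valid by calling STS
// GetCallerIdentity. An error here points at the credentials themselves,
// not at the Elasticsearch domain policy.
sess, err := session.NewSession(&aws.Config{
	Region:      aws.String("ap-southeast-1"),
	Credentials: credentials.NewStaticCredentials(accessKey, secretKey, ""),
})
if err != nil {
	log.Fatal(err)
}
out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
if err != nil {
	log.Fatal("credentials rejected: ", err)
}
fmt.Println("caller ARN:", aws.StringValue(out.Arn))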