In our Staging environment, we have credential-less access to our private S3 buckets; access is granted to individual Docker containers. I am trying to upload a file with PutObject using the aws-sdk-go-v2 SDK, but I continually get a 403 AccessDenied API error.
My upload code looks like this:
var uploadFileFunc = func(s3Details S3Details, key string, payload []byte, params MetadataParams) (*s3.PutObjectOutput, error) {
    client := getS3Client(s3Details)

    return client.PutObject(context.TODO(), &s3.PutObjectInput{
        Bucket:      aws.String(s3Details.Bucket),
        Key:         aws.String(key),
        Body:        bytes.NewReader(payload),
        ContentType: aws.String("text/xml"),
    })
}
func getS3Client(s3Details S3Details) *s3.Client {
    endpointResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
        if s3Details.EndpointUrl != "" {
            return aws.Endpoint{
                PartitionID:   "aws",
                URL:           s3Details.EndpointUrl,
                SigningRegion: s3Details.Region,
                SigningMethod: s3Details.SignatureVersion,
            }, nil
        }
        return aws.Endpoint{}, &aws.EndpointNotFoundError{}
    })

    cfg, _ := config.LoadDefaultConfig(context.TODO(),
        config.WithEndpointDiscovery(aws.EndpointDiscoveryEnabled),
        config.WithEndpointResolverWithOptions(endpointResolver))

    return s3.NewFromConfig(cfg, func(o *s3.Options) {
        o.Region = s3Details.Region
        o.Credentials = aws.AnonymousCredentials{}
        o.UsePathStyle = true
    })
}
I am using aws.AnonymousCredentials{} (as our access is credential-less), but anonymous credentials are only meant for unsigned requests. I cannot use NewStaticCredentialsProvider with empty values for AccessKeyID and/or SecretAccessKey, as that throws a StaticCredentialsEmptyError during Retrieve(). Adding dummy credentials throws an error that they are not on record. I am assuming this is the cause of my 403 AccessDenied.
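For reference, this is roughly how I injected the dummy credentials (a sketch using aws.CredentialsProviderFunc inside the s3.NewFromConfig options; the values are placeholders):

// Sketch: a provider whose Retrieve() returns non-empty placeholder values,
// so the request still gets SigV4-signed; S3 then rejects the unknown key.
o.Credentials = aws.CredentialsProviderFunc(func(ctx context.Context) (aws.Credentials, error) {
    return aws.Credentials{
        AccessKeyID:     "placeholder", // not a real key
        SecretAccessKey: "placeholder",
    }, nil
})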
How do I sign requests without providing credentials in the Go SDK? Is it even possible? In the boto3 Python library this works fine.
First of all, I strongly suggest using v2 of the AWS SDK for Go. I'll present here how I do this so far.
First, I get the AWS config to use with this code (only relevant parts are shown):
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
    Log.Fatal(err)
}
Here the package used is github.com/aws/aws-sdk-go-v2/config.
Then, I instantiate an s3Client to use for contacting the AWS S3 service:
s3Client := s3.NewFromConfig(cfg)
Here we use the package github.com/aws/aws-sdk-go-v2/service/s3. Finally, to upload your object, you have to run this code:
input := &s3.PutObjectInput{
    Key:    aws.String("test"),
    Bucket: aws.String("test"),
    Body:   bytes.NewReader([]byte("test")),
    ACL:    types.ObjectCannedACLPrivate,
}

if _, err := s3Client.PutObject(context.TODO(), input); err != nil {
    return "", fmt.Errorf("fn UploadFile %w", err)
}
The new package used here is github.com/aws/aws-sdk-go-v2/service/s3/types.
This code is a simplification, but you should be able to achieve what you need. Furthermore, it should take very little time to update the version of the SDK, and you can rely on both versions simultaneously if you have to work with a huge codebase.
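For instance, a single go.mod can require both SDKs side by side while you migrate, since they live under different module paths (the version numbers below are only examples):

require (
    github.com/aws/aws-sdk-go v1.44.0
    github.com/aws/aws-sdk-go-v2 v1.16.0
)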
Let me know if this helps!
Edit
I updated my solution by using the aws.AnonymousCredentials{} option. With these options I was able to successfully upload a file into an S3 bucket. Below you can find the entire solution:
package main

import (
    "bytes"
    "context"
    "crypto/tls"
    "net/http"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func GetAwsConfig() (*aws.Config, error) {
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        // config.WithClientLogMode(aws.LogRequestWithBody|aws.LogResponseWithBody),
        config.WithRegion("eu-west-1"),
        config.WithHTTPClient(&http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}),
        config.WithEndpointResolverWithOptions(
            aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
                return aws.Endpoint{
                    PartitionID:       "aws",
                    URL:               "http://127.0.0.1:4566",
                    SigningRegion:     "eu-west-1",
                    HostnameImmutable: true,
                }, nil
            }),
        ))
    if err != nil {
        return nil, err
    }
    return &cfg, err
}

func main() {
    cfg, _ := GetAwsConfig()

    s3Client := s3.NewFromConfig(*cfg, func(o *s3.Options) {
        o.Credentials = aws.AnonymousCredentials{}
    })

    if _, err := s3Client.PutObject(context.Background(), &s3.PutObjectInput{
        Bucket: aws.String("mybucket"),
        Key:    aws.String("myfile"),
        Body:   bytes.NewReader([]byte("hello")),
        ACL:    types.ObjectCannedACLPrivate,
    }); err != nil {
        panic(err)
    }
}
Before running the code, you have to create the bucket. I used the command below:
aws --endpoint-url=http://localhost:4566 s3 mb s3://mybucket
Thanks to this, you can upload the file into the mybucket S3 bucket. To check that the file exists, you can issue this command:
aws --endpoint-url=http://localhost:4566 s3 ls s3://mybucket --recursive --human-readable
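You can also verify from Go rather than the CLI; a HeadObject call on the same client should work (a sketch reusing the client and names from the program above, with fmt added to its imports):

// Sketch: confirm the upload landed by asking S3 for the object's metadata.
if _, err := s3Client.HeadObject(context.Background(), &s3.HeadObjectInput{
    Bucket: aws.String("mybucket"),
    Key:    aws.String("myfile"),
}); err != nil {
    panic(err)
}
fmt.Println("myfile is present in mybucket")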
Hope this helps in solving your issue!
Related
For a few days now, I've been trying to download a file from Huawei, more precisely from their cloud storage. The issue is, I haven't been able to connect to it...
I found an SDK from Huawei: https://github.com/huaweicloud/huaweicloud-sdk-go-v3
But I'm a bit lost among all the protocols I could use to connect to it, and I haven't been able to get any of them working. And to be honest, the documentation isn't really helping me...
I also found this: https://github.com/huaweicloud/huaweicloud-sdk-go-obs
There is an example of downloading a file there. Still, I can't even connect to Huawei... In the AppGallery project settings, I downloaded the configuration file and tried the endpoint, but without success...
Here is what I tried with obs (I know/guess it should be agc, but I haven't found a package for that), but it doesn't work due to the host...
/**
 * This sample demonstrates how to download an object
 * from OBS in different ways using the OBS SDK for Go.
 */
package huawei

import (
    "fmt"
    "io"

    "github.com/huaweicloud/huaweicloud-sdk-go-obs/obs"
)

type DownloadSample struct {
    bucketName string
    objectKey  string
    location   string
    obsClient  *obs.ObsClient
}

func newDownloadSample(ak, sk, endpoint, bucketName, objectKey, location string) *DownloadSample {
    obsClient, err := obs.New(ak, sk, endpoint)
    if err != nil {
        panic(err)
    }
    return &DownloadSample{obsClient: obsClient, bucketName: bucketName, objectKey: objectKey, location: location}
}

func (sample DownloadSample) GetObject() {
    input := &obs.GetObjectInput{}
    input.Bucket = sample.bucketName
    input.Key = sample.objectKey
    fmt.Printf("%+v\n", input)

    output, err := sample.obsClient.GetObject(input)
    if err != nil {
        panic(err)
    }
    defer func() {
        errMsg := output.Body.Close()
        if errMsg != nil {
            panic(errMsg)
        }
    }()

    fmt.Println("Object content:")
    body, err := io.ReadAll(output.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body))
    fmt.Println()
}

func RunDownloadSample() {
    const (
        endpoint   = "theEndPointInConfigJSONFile"
        ak         = "prettySureOfThis"
        sk         = "prettySureOfThis"
        bucketName = "prettySureOfThis"
        objectKey  = "test.txt" // a txt in the bucket to try to download it
        location   = ""
    )
    sample := newDownloadSample(ak, sk, endpoint, bucketName, objectKey, location)

    fmt.Println("Download object to string")
    sample.GetObject()
}
Thank you for your help
I am trying to run Go Azure SDK code to get a list of resource groups in my subscription, but I am getting the following error:
2022/01/22 20:25:58 MSI not available
exit status 1
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-10-01/resources"
    "github.com/Azure/go-autorest/autorest/azure/auth"
    "github.com/Azure/go-autorest/autorest/to"
)

func main() {
    authorize, err := auth.NewAuthorizerFromEnvironment()
    if err != nil {
        log.Fatal(err)
    }
    subscriptionID := os.Getenv("AZURE_SUB_ID")

    // Read resource groups
    resGrpClient := resources.NewGroupsClient(subscriptionID)
    resGrpClient.Authorizer = authorize

    // Read resources within the resource group
    resClient := resources.NewClient(subscriptionID)
    resClient.Authorizer = authorize

    for resGrpPage, err := resGrpClient.List(context.Background(), "", nil); resGrpPage.NotDone(); err = resGrpPage.Next() {
        if err != nil {
            log.Fatal(err)
        }
        for _, resGrp := range resGrpPage.Values() {
            fmt.Println("Resource Group Name: ", to.String(resGrp.Name))
            resList, _ := resClient.ListByResourceGroup(context.Background(), to.String(resGrp.Name), "", "", nil)
            for _, res := range resList.Values() {
                fmt.Println("\t- Resource Name: ", to.String(res.Name), " | Resource Type: ", to.String(res.Type))
            }
        }
    }
}
I am using GoLand and trying to run the app in WSL Ubuntu.
The solution is to use auth.NewAuthorizerFromCLI(), as auth.NewAuthorizerFromEnvironment does not use the CLI (the MSI in the error stands for Managed Service Identity).
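Applied to the code in the question, the change is a single line (a sketch using the same go-autorest auth package):

// Authenticate with the credentials cached by `az login`
// instead of environment variables or a managed identity.
authorize, err := auth.NewAuthorizerFromCLI()
if err != nil {
    log.Fatal(err)
}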
Please read the documentation Use environment-based authentication.
You have a couple of options, and each needs specific environment variables to be present.
In my case I use client credentials, so I need these three environment variables set when I run my code:
AZURE_CLIENT_ID
AZURE_CLIENT_SECRET
AZURE_TENANT_ID
I am trying to use aws-sdk-go-v2 to retrieve some data from an S3 bucket. In order to do so, I need to be able to set the Request Payer option; however, since I am new to the SDK, I have no idea how to do so.
I've tried setting the env variable AWS_REQUEST_PAYER=requester, but after a quick scan of the SDK's source code I couldn't find any indication that it is picked up as an option.
Using the SDK as directed also fails with an Unauthorized response:
import (
    "context"
    "fmt"
    "os"
    "path/filepath"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

type awsArchive struct {
    bucket string
    config aws.Config
    client *s3.Client
}

func (s *awsArchive) download(uri string, dest string) error {
    downloader := manager.NewDownloader(s.client)
    paginator := s3.NewListObjectsV2Paginator(s.client, &s3.ListObjectsV2Input{
        Bucket: &s.bucket,
        Prefix: &uri,
    })
    for paginator.HasMorePages() {
        page, err := paginator.NextPage(context.TODO())
        if err != nil {
            return err
        }
        for _, obj := range page.Contents {
            if err := downloadToFile(downloader, dest, s.bucket, aws.ToString(obj.Key)); err != nil {
                return err
            }
        }
    }
    return nil
}

func downloadToFile(downloader *manager.Downloader, targetDirectory, bucket, key string) error {
    // Create the directories in the path
    file := filepath.Join(targetDirectory, key)
    if err := os.MkdirAll(filepath.Dir(file), 0775); err != nil {
        return err
    }

    // Set up the local file
    fd, err := os.Create(file)
    if err != nil {
        return err
    }
    defer fd.Close()

    // Download the file using the AWS SDK for Go
    fmt.Printf("Downloading s3://%s/%s to %s...\n", bucket, key, file)
    _, err = downloader.Download(context.TODO(), fd, &s3.GetObjectInput{Bucket: &bucket, Key: &key})
    return err
}
Error: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: ..., HostID: ..., api error AccessDenied: Access Denied
Would anyone be able to provide me with an example of using the Go SDK to get S3 files from a Requester Pays enabled bucket, i.e. the equivalent of:
aws s3 sync --request-payer requester source_bucket destination_folder
It seems you can use the field named 'RequestPayer' in the GetObjectInput struct. I found it in the package documentation.
From the link:
type GetObjectInput struct {
    // The bucket name containing the object. When using this action with an access
    ...

    Bucket *string

    // Key of the object to get.
    // This member is required.
    Key *string

    ...
    ...

    // Confirms that the requester knows that they will be charged for the request.
    // Bucket owners need not specify this parameter in their requests. For information
    // about downloading objects from requester pays buckets, see Downloading Objects
    // in Requester Pays Buckets
    // (https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html)
    // in the Amazon S3 Developer Guide.
    RequestPayer types.RequestPayer
You can refer to the 'RequestPayer' definition here.
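Applied to the download code from the question, that means setting the field on every call that touches the bucket, the ListObjectsV2 pagination included, since that is where the 403 came from. A sketch (types is github.com/aws/aws-sdk-go-v2/service/s3/types):

// Acknowledge the requester-pays charge on both the listing and the download.
paginator := s3.NewListObjectsV2Paginator(s.client, &s3.ListObjectsV2Input{
    Bucket:       &s.bucket,
    Prefix:       &uri,
    RequestPayer: types.RequestPayerRequester,
})

// ...and likewise in downloadToFile:
_, err = downloader.Download(context.TODO(), fd, &s3.GetObjectInput{
    Bucket:       &bucket,
    Key:          &key,
    RequestPayer: types.RequestPayerRequester,
})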
I started using AWS SAM, and for now I only have some unit tests, but I want to try to run integration tests in a pre-traffic hook function.
Unfortunately, there seems to be no code example for Go; all I could find was for JavaScript.
From this example I pieced together that I have to use the CodeDeploy SDK and call PutLifecycleEventHookExecutionStatus, but the specifics remain unclear. The AWS code example repo for Go has no examples for CodeDeploy either.
More information about the topic is available here: https://github.com/awslabs/serverless-application-model/blob/master/docs/safe_lambda_deployments.rst#pretraffic-posttraffic-hooks.
I want to start out by testing a lambda function that simply queries DynamoDB.
Something like this works:
package main

import (
    "context"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/codedeploy"
)

type CodeDeployEvent struct {
    DeploymentId                  string `json:"deploymentId"`
    LifecycleEventHookExecutionId string `json:"lifecycleEventHookExecutionId"`
}

func HandleRequest(ctx context.Context, event CodeDeployEvent) (string, error) {
    // add some tests here and change the status flag as needed . . .
    client := codedeploy.New(session.New())
    params := &codedeploy.PutLifecycleEventHookExecutionStatusInput{
        DeploymentId:                  &event.DeploymentId,
        LifecycleEventHookExecutionId: &event.LifecycleEventHookExecutionId,
        Status:                        aws.String("Succeeded"),
    }
    req, _ := client.PutLifecycleEventHookExecutionStatusRequest(params)
    return "", req.Send()
}

func main() {
    lambda.Start(HandleRequest)
}
I got around to implementing this and want to share my complete solution.
After figuring out how to use it, I decided against using it, because there are a couple of drawbacks:
there is no way to expose the new canary version to a dedicated portion of the user base, which means users will sometimes hit the new version and sometimes the old one
invoking functions that publish to SNS will trigger all downstream actions, which might hit the new or the old version of the downstream services; that would cause a lot of problems in case of breaking APIs
IAM changes affect both versions immediately, possibly breaking the old version.
Instead, I deploy everything to a pre-prod account, run my integration and e2e tests there, and if they succeed I deploy to prod.
The CDK code to create a canary deployment:
const versionAlias = new lambda.Alias(this, 'Alias', {
  aliasName: "alias",
  version: this.lambda.currentVersion,
})

const preHook = new lambda.Function(this, 'LambdaPreHook', {
  description: "pre hook",
  code: lambda.Code.fromAsset('dist/upload/convert-pre-hook'),
  handler: 'main',
  runtime: lambda.Runtime.GO_1_X,
  memorySize: 128,
  timeout: cdk.Duration.minutes(1),
  environment: {
    FUNCTION_NAME: this.lambda.currentVersion.functionName,
  },
  reservedConcurrentExecutions: 5,
  logRetention: RetentionDays.ONE_WEEK,
})
// this.lambda.grantInvoke(preHook) // this doesn't work, I need to grant invoke to all functions :s
preHook.addToRolePolicy(new iam.PolicyStatement({
  actions: [
    "lambda:InvokeFunction",
  ],
  resources: ["*"],
  effect: iam.Effect.ALLOW,
}))

const application = new codedeploy.LambdaApplication(this, 'CodeDeployApplication')
new codedeploy.LambdaDeploymentGroup(this, 'CanaryDeployment', {
  application: application,
  alias: versionAlias,
  deploymentConfig: codedeploy.LambdaDeploymentConfig.ALL_AT_ONCE,
  preHook: preHook,
  autoRollback: {
    failedDeployment: true,
    stoppedDeployment: true,
    deploymentInAlarm: false,
  },
  ignorePollAlarmsFailure: false,
  // alarms:
  // autoRollback: codedeploy.A
  // postHook:
})
My Go code for the pre-hook function follows. PutLifecycleEventHookExecutionStatus tells CodeDeploy whether the pre hook succeeded or not. Unfortunately, in case you fail the deployment, the message you get in the cdk deploy output is utterly useless, so you need to check the pre/post hook logs instead.
In order to actually run the integration test, I simply invoke the lambda and check whether an error occurred.
package main

import (
    "encoding/base64"
    "fmt"
    "log"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/codedeploy"
    lambdaService "github.com/aws/aws-sdk-go/service/lambda"
)

var svc *codedeploy.CodeDeploy
var lambdaSvc *lambdaService.Lambda

type codeDeployEvent struct {
    DeploymentId                  string `json:"deploymentId"`
    LifecycleEventHookExecutionId string `json:"lifecycleEventHookExecutionId"`
}

func handler(e codeDeployEvent) error {
    params := &codedeploy.PutLifecycleEventHookExecutionStatusInput{
        DeploymentId:                  &e.DeploymentId,
        LifecycleEventHookExecutionId: &e.LifecycleEventHookExecutionId,
    }

    err := handle()
    if err != nil {
        log.Println(err)
        params.Status = aws.String(codedeploy.LifecycleEventStatusFailed)
    } else {
        params.Status = aws.String(codedeploy.LifecycleEventStatusSucceeded)
    }

    _, err = svc.PutLifecycleEventHookExecutionStatus(params)
    if err != nil {
        return fmt.Errorf("failed putting the lifecycle event hook execution status. the status was %s", *params.Status)
    }
    return nil
}

func handle() error {
    functionName := os.Getenv("FUNCTION_NAME")
    if functionName == "" {
        return fmt.Errorf("FUNCTION_NAME not set")
    }
    log.Printf("function name: %s", functionName)

    // invoke lambda via sdk
    input := &lambdaService.InvokeInput{
        FunctionName:   &functionName,
        Payload:        nil,
        LogType:        aws.String(lambdaService.LogTypeTail),                   // returns the log in the response
        InvocationType: aws.String(lambdaService.InvocationTypeRequestResponse), // synchronous - default
    }
    err := input.Validate()
    if err != nil {
        return fmt.Errorf("validating the input failed: %v", err)
    }

    resp, err := lambdaSvc.Invoke(input)
    if err != nil {
        return fmt.Errorf("failed to invoke lambda: %v", err)
    }

    decodeString, err := base64.StdEncoding.DecodeString(*resp.LogResult)
    if err != nil {
        return fmt.Errorf("failed to decode the log: %v", err)
    }
    log.Printf("log result: %s", decodeString)

    if resp.FunctionError != nil {
        return fmt.Errorf("lambda was invoked but returned error: %s", *resp.FunctionError)
    }
    return nil
}

func main() {
    sess, err := session.NewSession()
    if err != nil {
        return
    }
    svc = codedeploy.New(sess)
    lambdaSvc = lambdaService.New(sess)

    lambda.Start(handler)
}
I have the following working code to delete an object from Amazon S3:
params := &s3.DeleteObjectInput{
    Bucket: aws.String("Bucketname"),
    Key:    aws.String("ObjectKey"),
}
s3Conn.DeleteObject(params)
But what I want to do is delete all files under a folder, using the wildcard **. I know Amazon S3 doesn't treat "x/y/file.jpg" as a folder y inside x, but what I want to achieve is this: by specifying "x/y*", delete all subsequent objects sharing that prefix. I tried Amazon's multi-object delete:
params := &s3.DeleteObjectsInput{
    Bucket: aws.String("BucketName"),
    Delete: &s3.Delete{
        Objects: []*s3.ObjectIdentifier{
            {
                Key: aws.String("x/y/.*"),
            },
        },
    },
}
result, err := s3Conn.DeleteObjects(params)
I know that in PHP it can be done easily with s3->delete_all_objects, as per this answer. Is the same action possible in Go?
Unfortunately the goamz package doesn't have a method similar to the PHP library's delete_all_objects.
However, the source code for the PHP delete_all_objects is available here (toggle source view): http://docs.aws.amazon.com/AWSSDKforPHP/latest/#m=AmazonS3/delete_all_objects
Here are the important lines of code:
public function delete_all_objects($bucket, $pcre = self::PCRE_ALL)
{
    // Collect all matches
    $list = $this->get_object_list($bucket, array('pcre' => $pcre));

    // As long as we have at least one match...
    if (count($list) > 0)
    {
        $objects = array();

        foreach ($list as $object)
        {
            $objects[] = array('key' => $object);
        }

        $batch = new CFBatchRequest();
        $batch->use_credentials($this->credentials);

        foreach (array_chunk($objects, 1000) as $object_set)
        {
            $this->batch($batch)->delete_objects($bucket, array(
                'objects' => $object_set
            ));
        }

        $responses = $this->batch($batch)->send();
As you can see, the PHP code will actually make an HTTP request on the bucket to first get all files matching PCRE_ALL, which is defined elsewhere as const PCRE_ALL = '/.*/i';.
You can only delete 1000 files at once, so delete_all_objects then creates a batch function to delete 1000 files at a time.
You'll have to build the same functionality in your Go program, as the goamz package doesn't support this yet. Luckily it should only be a few lines of code, and you have a guide in the PHP library.
It might be worth submitting a pull request for the goamz package once you're done!
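In the meantime, here is a rough sketch of that loop with the official aws-sdk-go rather than goamz, assuming s3Conn is a *s3.S3 client as the question's snippets suggest:

// Sketch: list every key under the prefix and multi-delete page by page.
// Each page holds at most 1000 keys, matching the DeleteObjects limit.
err := s3Conn.ListObjectsPages(&s3.ListObjectsInput{
    Bucket: aws.String("BucketName"),
    Prefix: aws.String("x/y"),
}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
    if len(page.Contents) == 0 {
        return false
    }
    objects := make([]*s3.ObjectIdentifier, 0, len(page.Contents))
    for _, obj := range page.Contents {
        objects = append(objects, &s3.ObjectIdentifier{Key: obj.Key})
    }
    _, err := s3Conn.DeleteObjects(&s3.DeleteObjectsInput{
        Bucket: aws.String("BucketName"),
        Delete: &s3.Delete{Objects: objects},
    })
    return err == nil // stop paging on the first failed batch
})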
Using the mc tool you can do:
mc rm -r --force https://BucketName.s3.amazonaws.com/x/y
It will delete all the objects with the prefix "x/y".
You can achieve the same with Go using minio-go like this:
package main

import (
    "log"

    "github.com/minio/minio-go"
)

func main() {
    config := minio.Config{
        AccessKeyID:     "YOUR-ACCESS-KEY-HERE",
        SecretAccessKey: "YOUR-PASSWORD-HERE",
        Endpoint:        "https://s3.amazonaws.com",
    }
    // find your S3 endpoint here: http://docs.aws.amazon.com/general/latest/gr/rande.html
    s3Client, err := minio.New(config)
    if err != nil {
        log.Fatalln(err)
    }

    isRecursive := true
    for object := range s3Client.ListObjects("BucketName", "x/y", isRecursive) {
        if object.Err != nil {
            log.Fatalln(object.Err)
        }
        err := s3Client.RemoveObject("BucketName", object.Key)
        if err != nil {
            log.Println(err)
            continue // keep going even if a single removal fails
        }
        log.Println("Removed: " + object.Key)
    }
}
Since this question was asked, the AWS Go SDK for S3 has gained some new methods in S3 Manager to handle this task (in response to @Itachi's PR).
See the GitHub record: https://github.com/aws/aws-sdk-go/issues/448#issuecomment-309078450
Here is their example in v1: https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/go/s3/DeleteObjects/DeleteObjects.go#L36
To get "wildcard matching" on paths inside the bucket, add the Prefix param to the example's ListObjectsInput call, as shown here:
iter := s3manager.NewDeleteListIterator(svc, &s3.ListObjectsInput{
    Bucket: bucket,
    Prefix: aws.String("somePathString"),
})
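To actually run the deletion, feed that iterator to a batch-delete client from the same s3manager package (a sketch; svc and bucket as in the linked example):

// Deletes everything the iterator yields, batching the requests internally.
batcher := s3manager.NewBatchDeleteWithClient(svc)
if err := batcher.Delete(aws.BackgroundContext(), iter); err != nil {
    log.Fatalf("failed to delete objects under prefix: %v", err)
}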
A bit late to the game, but since I was having the same problem, I created a small package that you can copy into your code base and import as needed.
func ListKeysInPrefix(s s3iface.S3API, bucket, prefix string) ([]string, error) {
    res, err := s.ListObjectsV2(&s3.ListObjectsV2Input{
        Bucket: aws.String(bucket),
        Prefix: aws.String(prefix),
    })
    if err != nil {
        return []string{}, err
    }
    var keys []string
    for _, key := range res.Contents {
        keys = append(keys, *key.Key)
    }
    return keys, nil
}

func createDeleteObjectsInput(keys []string) *s3.Delete {
    rm := []*s3.ObjectIdentifier{}
    for _, key := range keys {
        rm = append(rm, &s3.ObjectIdentifier{Key: aws.String(key)})
    }
    return &s3.Delete{Objects: rm, Quiet: aws.Bool(false)}
}

func DeletePrefix(s s3iface.S3API, bucket, prefix string) error {
    keys, err := ListKeysInPrefix(s, bucket, prefix)
    if err != nil {
        return err
    }
    _, err = s.DeleteObjects(&s3.DeleteObjectsInput{
        Bucket: aws.String(bucket),
        Delete: createDeleteObjectsInput(keys),
    })
    return err
}
So, in case you have a bucket called "somebucket" with the structure s3://somebucket/foo/some-prefixed-folder/bar/test.txt and you wanted to delete from some-prefixed-folder onwards, usage would be:
func main() {
    // create your s3 client here
    // client := ....
    err := DeletePrefix(client, "somebucket", "some-prefixed-folder")
    if err != nil {
        panic(err)
    }
}
This implementation only allows deleting a maximum of 1,000 entries under the given prefix, due to the ListObjectsV2 limit - but the API is paginated, so it's just a matter of continuing to fetch results until none are left, as sketched below.
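A sketch of that pagination, extending ListKeysInPrefix above with the continuation token (same s3iface-based style):

// Sketch: page through all keys under a prefix, 1000 at a time.
func ListAllKeysInPrefix(s s3iface.S3API, bucket, prefix string) ([]string, error) {
    var keys []string
    input := &s3.ListObjectsV2Input{
        Bucket: aws.String(bucket),
        Prefix: aws.String(prefix),
    }
    for {
        res, err := s.ListObjectsV2(input)
        if err != nil {
            return nil, err
        }
        for _, obj := range res.Contents {
            keys = append(keys, *obj.Key)
        }
        // Stop once S3 reports there are no further pages.
        if res.IsTruncated == nil || !*res.IsTruncated {
            return keys, nil
        }
        input.ContinuationToken = res.NextContinuationToken
    }
}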