I am using the tusd library to upload a file directly to S3 in Go. It seems to be working, but tusd normally uploads two files: a .info metadata file and a .bin file with the actual content. For some reason my code is only uploading the info file.
The documentation is quite tricky to navigate, so perhaps I have missed a setting somewhere.
The code is in a gist showing both the server and the client code.
There are multiple issues here.
Your tus library import paths are wrong; they should be:
"github.com/tus/tusd/pkg/handler"
"github.com/tus/tusd/pkg/s3store"
You don't use the S3 store properly: you set up a configuration that stores uploads directly on your server:
fStore := filestore.FileStore{
    Path: "./uploads",
}
Instead it should be something like this:
import (
    "fmt"
    "net/http"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    "github.com/tus/tusd/pkg/handler"
    "github.com/tus/tusd/pkg/s3store"
)

// S3 access configuration
s3Config := &aws.Config{
    Region:           aws.String(os.Getenv("AWS_REGION")),
    Credentials:      credentials.NewStaticCredentials(os.Getenv("AWS_ACCESS_KEY_ID"), os.Getenv("AWS_SECRET_ACCESS_KEY"), ""),
    DisableSSL:       aws.Bool(true),
    S3ForcePathStyle: aws.Bool(true),
}

// Set up the S3 storage
s3Store := s3store.New(os.Getenv("AWS_BUCKET_NAME"), s3.New(session.Must(session.NewSession()), s3Config))

// Create a new and empty store composer
composer := handler.NewStoreComposer()

// UseIn sets this store as the core data store in the passed composer and adds all possible extensions to it.
s3Store.UseIn(composer)

// Set up the handler
handler, err := handler.NewHandler(handler.Config{
    BasePath:      "/files/",
    StoreComposer: composer,
})
if err != nil {
    panic(fmt.Errorf("unable to create handler: %s", err))
}

// Listen and serve
http.Handle("/files/", http.StripPrefix("/files/", handler))
err = http.ListenAndServe(":8080", nil)
if err != nil {
    panic(fmt.Errorf("unable to listen: %s", err))
}
It is possible that your client isn't working properly either (I didn't test it).
I would recommend using https://github.com/eventials/go-tus instead of trying to implement the protocol yourself.
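For reference, a minimal client sketch using go-tus against the server above (the file name and endpoint are illustrative, and I have not run this against your setup):

package main

import (
    "os"

    "github.com/eventials/go-tus"
)

func main() {
    f, err := os.Open("video.mp4") // illustrative file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Point the client at the tusd handler mounted on /files/ above.
    client, err := tus.NewClient("http://localhost:8080/files/", nil)
    if err != nil {
        panic(err)
    }

    // Create an upload from the file and push it to the server.
    upload, err := tus.NewUploadFromFile(f)
    if err != nil {
        panic(err)
    }
    uploader, err := client.CreateUpload(upload)
    if err != nil {
        panic(err)
    }
    if err := uploader.Upload(); err != nil {
        panic(err)
    }
}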
I'm using the following code, which works as expected: from the CLI I run gcloud auth application-default login, enter my credentials, and the code runs successfully from my MacBook.
Now I need to run this code in my CI, and we need a different approach there. What should that approach be for obtaining credentials: client_secret and client_id, a service account, some environment variable? And what is the way to do it from Go code?
import (
    "context"
    "fmt"
    "log"

    "golang.org/x/oauth2/google"
    compute "google.golang.org/api/compute/v1"
)

project := "my-project"
region := "my-region"
ctx := context.Background()

c, err := google.DefaultClient(ctx, compute.CloudPlatformScope)
if err != nil {
    log.Fatal(err)
}

computeService, err := compute.New(c)
if err != nil {
    log.Fatal(err)
}

req := computeService.Routers.List(project, region)
if err := req.Pages(ctx, func(page *compute.RouterList) error {
    for _, router := range page.Items {
        // process each `router` resource:
        fmt.Printf("%#v\n", router)
        // NAT Gateways are found in router.nats
    }
    return nil
}); err != nil {
    log.Fatal(err)
}
Since you're using Jenkins, you probably want to start with the documentation on how to create a service account. It guides you through creating a service account and exporting a key that can be set as a variable in another CI/CD system.
Then refer to the client library docs on how to create a new client with a source credential, e.g.:
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
If you provide no source, it attempts to read the credentials locally and acts as the service account running the operation (not applicable in your use case).
Many CIs support exporting specific environment variables, or your script / configuration can do it too.
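For example, a minimal sketch of reading a key exported by the CI from an environment variable (GCP_SA_KEY is a hypothetical variable name; the storage client is just one example of a client that accepts credentials this way):

package main

import (
    "context"
    "log"
    "os"

    "cloud.google.com/go/storage"
    "google.golang.org/api/option"
)

func main() {
    // GCP_SA_KEY is a hypothetical CI variable holding the service
    // account key JSON exported from the Google Cloud console.
    keyJSON := os.Getenv("GCP_SA_KEY")
    if keyJSON == "" {
        log.Fatal("GCP_SA_KEY is not set")
    }

    ctx := context.Background()
    client, err := storage.NewClient(ctx, option.WithCredentialsJSON([]byte(keyJSON)))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
}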
But if you want to run this in CI, why do you need such configuration? Integration tests?
Some services can be used locally for unit/smoke testing. For Pub/Sub, for instance, there is a way to run a fake/local server to perform some tests.
Or perhaps I did not understand your question; in that case, can you provide an example?
I have a function that works locally, with a config file that is loaded into the app using Viper; I also have viper.AutomaticEnv() set.
After deploying to AWS Lambda, the env vars seem to be ignored. I went over to the Viper issues page and found this: https://github.com/spf13/viper/issues/584
It looks like Viper requires a config file to load, or it simply stops working even though env vars are set.
How do you handle local dev vs deployment for lambda secrets in Go?
I would like to avoid AWS Secrets Manager if possible
There are a lot of options for handling secrets in AWS Lambdas. I'd recommend not using Viper or any of those tools; building a Lambda that reads its configuration from environment variables is simple.
That said, I would also recommend reading secrets from the AWS SSM Parameter Store.
main.go
package main

import (
    "errors"
    "fmt"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
)

func (h handler) handleRequest() error {
    fmt.Printf("My secret: %s", h.config.secret)
    return nil
}

type configuration struct {
    secret string
}

type handler struct {
    config configuration
}

func newConfig() (configuration, error) {
    secret, ok := os.LookupEnv("SECRET")
    if !ok {
        return configuration{}, errors.New("can not read environment variable 'SECRET'")
    }
    return configuration{
        secret: secret,
    }, nil
}

func main() {
    cfg, err := newConfig()
    if err != nil {
        fmt.Printf("unable to create configuration: %v\n", err)
        os.Exit(1)
    }
    h := handler{
        config: cfg,
    }
    lambda.Start(h.handleRequest)
}
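If you go with the SSM Parameter Store route mentioned above, here is a minimal sketch with aws-sdk-go (the parameter name /myapp/secret is illustrative):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := ssm.New(sess)

    // Store the secret as a SecureString and set WithDecryption so
    // SSM returns the decrypted value.
    out, err := svc.GetParameter(&ssm.GetParameterInput{
        Name:           aws.String("/myapp/secret"),
        WithDecryption: aws.Bool(true),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(*out.Parameter.Value)
}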
No need to use Viper and increase your binary size unnecessarily. Remember: larger binary, longer cold-start time.
How do you handle local dev vs deployment for lambda secrets in Go?
Usually, we only run unit tests locally, using mocked services that do not require secrets. Most of the "integration" testing is done in AWS. Every developer has their own "environment" that they can deploy; to manage this we use Terraform.
If you really need to test something locally, I'd recommend creating a test file that you gitignore to avoid committing it, and hard-coding the secret in that file.
So for example, you can have a playground_test.go which you ignore in your .gitignore file.
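A hypothetical playground_test.go (reusing the handler and configuration types from the main.go above, in the same package) could look like this; the secret value is obviously illustrative:

package main

import "testing"

// This file is listed in .gitignore, so the hard-coded secret below
// never reaches the repository.
func TestHandleRequestWithRealSecret(t *testing.T) {
    h := handler{
        config: configuration{
            secret: "my-real-secret-value", // illustrative only
        },
    }
    if err := h.handleRequest(); err != nil {
        t.Fatal(err)
    }
}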
I want to get a list of all the Docker images that I have uploaded to a particular project.
I am using the official SDK https://pkg.go.dev/cloud.google.com/go
As per my understanding, the listing should be under the container module, but I am not able to find any method called ListImages or ListRepositories that serves the purpose.
I checked out the artifactregistry module, but it seems to be useful only if I push my images to Artifact Registry.
What is the correct way to get a listing of Docker images (per project) in Go?
I don't think there is a client library for the Container Registry, since it's just an implementation of the Docker Registry API, according to this thread at least.
Have you tried the registry package? https://pkg.go.dev/github.com/docker/docker/registry
For the repository you can use hostname/project-id, where hostname is gcr.io, eu.gcr.io, or us.gcr.io, depending on how your repositories in GCR are configured.
Golang approach (tested after running gcloud auth configure-docker):
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/google/go-containerregistry/pkg/authn"
    gcr "github.com/google/go-containerregistry/pkg/name"
    "github.com/google/go-containerregistry/pkg/v1/google"
    "github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
    auth, err := google.NewGcloudAuthenticator()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(auth)

    registry, err := gcr.NewRegistry("gcr.io")
    if err != nil {
        log.Fatal(err)
    }

    ctx := context.Background()
    repos, err := remote.Catalog(ctx, registry, remote.WithAuthFromKeychain(authn.DefaultKeychain))
    if err != nil {
        log.Fatal(err)
    }
    for _, repo := range repos {
        fmt.Println(repo)
    }
}
response:
&{0xc0000000}
project-id/imagename
REST API Approach:
You can list the images like this (replace gcr.io with your GCR endpoint, such as gcr.io, us.gcr.io, asia.gcr.io, etc.):
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" "https://gcr.io/v2/_catalog"
response:
{"repositories":["project-id/imagename"]}
You can find some related details on fetching the token in this link:
How to list images and tags from the gcr.io Docker Registry using the HTTP API?
PS: you will have to filter out the project-specific images from the response if you have access to multiple projects.
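For completeness, a minimal sketch of that filtering in Go (the project IDs are illustrative):

package main

import (
    "fmt"
    "strings"
)

// filterByProject keeps only catalog entries belonging to one project;
// GCR repository names have the form "project-id/imagename".
func filterByProject(repos []string, projectID string) []string {
    var out []string
    for _, repo := range repos {
        if strings.HasPrefix(repo, projectID+"/") {
            out = append(out, repo)
        }
    }
    return out
}

func main() {
    repos := []string{"my-project/app", "other-project/tool"}
    fmt.Println(filterByProject(repos, "my-project")) // [my-project/app]
}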
I'm writing an application in Go where a user can upload a file that is eventually uploaded to an Amazon S3 bucket. I've written an endpoint through which the user uploads the file as multipart form data; once the upload is complete, I upload it to the S3 bucket.
func UploadRoutes(route *gin.Engine) {
    route.POST("/upload", uploadHandler)
}

func uploadHandler(context *gin.Context) {
    fileHeader, err := context.FormFile("file")
    // check err
    file, err := fileHeader.Open()
    // check err

    // uploads to S3 bucket
    err = utils.Upload(file, fileHeader.Filename)
}
But I am not sure where the uploaded file data is stored between the two uploads. It seems the file sits in memory after the upload from the user completes and before it is uploaded to the S3 bucket (reference: https://pkg.go.dev/mime/multipart#File).
If that's the case, large file uploads would consume too much server memory. As a workaround, I could write the file to disk and then initiate a multipart upload to the S3 bucket. Are there better alternatives?
Here is a related question, but I think I'm already using multipart upload as suggested in the answer: AWS S3 uploading/downloading huge files with low memory footprint
You can use the s3manager Uploader's Upload method and pass the multipart reader directly to S3 instead of storing the file on the server before uploading:
import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
    "github.com/gin-gonic/gin"
)

func uploadHandler(context *gin.Context) {
    fileHeader, err := context.FormFile("file")
    // check err
    file, err := fileHeader.Open()
    // check err
    defer file.Close()

    config := &aws.Config{
        Region: aws.String("us-west-1"), // your region here
    }
    awsSession := session.Must(session.NewSession(config))
    uploader := s3manager.NewUploader(awsSession)

    result, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(bucketname),
        Key:    aws.String(filePath),
        Body:   file, // pass the opened multipart file here; the uploader streams it to S3
    })
    // check err and use result (result.Location holds the object URL)
}
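Note that net/http (and therefore Gin) already spills multipart bodies above the in-memory threshold (32 MiB by default) to temporary files on disk, so the whole file is not necessarily held in memory even without this change. If memory is still a concern, the uploader's part size and concurrency can be tuned; a minimal sketch (the values are illustrative):

// Assumes awsSession from the handler above. The uploader splits the
// body into parts and uploads them concurrently, so peak memory per
// upload is roughly PartSize * Concurrency.
uploader := s3manager.NewUploader(awsSession, func(u *s3manager.Uploader) {
    u.PartSize = 10 * 1024 * 1024 // 10 MiB parts (the default is 5 MiB)
    u.Concurrency = 3             // parts uploaded in parallel (the default is 5)
})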
I'd like to create signed URLs to Google Cloud Storage resources from an app deployed on Cloud Run.
I set up Cloud Run with a custom service account with the GCS role, following this guide.
My intent was to use V4 signing to create signed URLs from Cloud Run. There is a guide for this use case in which a service_account.json file is used to generate the JWT config. This works for me on localhost when I download the file from Google's IAM, but I'd like to avoid committing this file to the repository and instead use the service account that I configured in the Cloud Run UI.
I was hoping that Cloud Run would inject this service account file into the app container and make it accessible via the GOOGLE_APPLICATION_CREDENTIALS variable, but that's not the case.
Do you have a recommendation on how to do this? Thank you.
As you say, the Golang Storage client libraries require a service account JSON file to sign URLs.
There is currently a feature request open on GitHub for this, but you should be able to work around it with this sample that I found here:
package main

import (
    "context"
    "fmt"
    "time"

    credentials "cloud.google.com/go/iam/credentials/apiv1"
    "cloud.google.com/go/storage"
    credentialspb "google.golang.org/genproto/googleapis/iam/credentials/v1"
)

const (
    bucketName     = "bucket-name"
    objectName     = "object"
    serviceAccount = "[PROJECTNUMBER]-compute@developer.gserviceaccount.com"
)

func main() {
    ctx := context.Background()
    c, err := credentials.NewIamCredentialsClient(ctx)
    if err != nil {
        panic(err)
    }

    opts := &storage.SignedURLOptions{
        Method:         "GET",
        GoogleAccessID: serviceAccount,
        SignBytes: func(b []byte) ([]byte, error) {
            req := &credentialspb.SignBlobRequest{
                Payload: b,
                // The IAM Credentials API expects the full resource name of the service account.
                Name: "projects/-/serviceAccounts/" + serviceAccount,
            }
            resp, err := c.SignBlob(ctx, req)
            if err != nil {
                panic(err)
            }
            return resp.SignedBlob, err
        },
        Expires: time.Now().Add(15 * time.Minute),
    }

    u, err := storage.SignedURL(bucketName, objectName, opts)
    if err != nil {
        panic(err)
    }
    fmt.Printf("\"%v\"", u)
}
Cloud Run (and other compute platforms) does not inject a service account key file. Instead, it makes access tokens available via the instance metadata service; you can then exchange such an access token for a JWT.
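For illustration, a minimal sketch of fetching that token from the metadata server (this only works when running on GCP, e.g. inside a Cloud Run instance):

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // The metadata server is only reachable from inside GCP compute;
    // this request will fail on a local machine.
    req, err := http.NewRequest("GET",
        "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Metadata-Flavor", "Google")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // JSON containing access_token, expires_in, token_type
}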
However, Google's client libraries and gcloud often work out of the box on GCP's compute platforms without any explicit authentication, so if you follow the instructions on the page you linked (gcloud or the code samples), it should just work.