Laravel filesystem using S3 - custom metadata

Amazon S3 allows you to attach custom metadata to objects; however, I have been unable to figure out how to access this data using the Laravel filesystem. Searching turns up little information about this. Does anyone know how to access that data?
EDIT: I found a method on Storage that displays the metadata; however, it does not seem to include the custom metadata key and value I added to the image via the S3 console.
return Storage::getMetaData($path);
results:
{
    "path": "toolkit\/social-media\/facebook\/cover-image\/SG-Chivalry-Facebook-Cover-Co-Branded.jpg",
    "dirname": "toolkit\/social-media\/facebook\/cover-image",
    "basename": "SG-Chivalry-Facebook-Cover-Co-Branded.jpg",
    "extension": "jpg",
    "filename": "SG-Chivalry-Facebook-Cover-Co-Branded",
    "timestamp": 1460581502,
    "size": "113476",
    "mimetype": "image\/jpeg",
    "type": "file"
}

Okay, I found a solution using PHP's get_headers function:
return get_headers("file.jpg");
This returns all of the response headers, including the custom metadata fields, from S3 (note that get_headers needs the full object URL, not just the file name).
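For reference, here is a minimal sketch of how get_headers can be combined with the S3 disk; the metadata key my-key is hypothetical, and the object has to be readable at its URL for get_headers to reach it:
$url = Storage::disk('s3')->url($path);          // full object URL
$headers = get_headers($url, 1);                 // 1 = return an associative array of headers
// custom metadata added in the S3 console comes back as x-amz-meta-* response headers
$customValue = $headers['x-amz-meta-my-key'] ?? null;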

Related

The specified key does not exist - Laravel

I'm trying to create a temporary URL for a file in my application. I'm able to upload the file to the S3 bucket, and I'm able to use the method \Storage::temporaryUrl($this->url, now()->addHour(1)), which generates the following URL:
https://xxxxxx.s3.eu-west-2.amazonaws.com/https%3A//xxxxxxx.s3.eu-west-2.amazonaws.com/images/fYr20cgYh3nAwoEEQCOTaVTLLo7nRFrXjp7cYcCz.jpg?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVHH6TLEV3Z2FBWLY%2F20210622%2Feu-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210622T191649Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=6300aa81e69c6f4c96cb6f319a2b5ed2bfc2b2767138994928a49f3f93906745
When I click on this URL I get the following error:
The specified key does not exist.
From this question https://stackoverflow.com/a/28654035/4581336 I have checked the following:
the file name exists in my bucket, and it is a copy-paste of the file name shown under the Object URL in my bucket
I tried removing the extension from the file name to see if it affects the URL, but no luck either; having xxxx.jpg vs xxxxx as the file name makes no difference
I'm fairly new to the AWS world, so I will copy-paste things that I think might be relevant to solving the issue.
The IAM user created has the following permissions:
AmazonS3FullAccess
The bucket's Block Public Access settings for this account have everything blocked.
My bucket's public permissions have everything enabled as well.
I'm currently logged in as the root user (so I'm assuming I can do whatever I want, since I'm the root).
If I make all my buckets public, I'm able to access the files using the URL generated by the temporaryUrl method.
The final goal
The objective of the bucket is to have a place to store files uploaded by users of my application. I don't want all the files to be public, so I would like to restrict users to the files they own, which is why I create a temporary URL with:
Storage::disk('s3')->temporaryUrl($url, now()->addMinutes(30));
Since I'm fairly new to storing files in S3, my logic may be flawed. Please correct me if this is not how it's supposed to work.
Questions I have looked at but didn't help me
Amazon S3 exception: "The specified key does not exist"
CloudFront + S3 Website: "The specified key does not exist" when an implicit index document should be displayed
aws-sdk: NoSuchKey: The specified key does not exist?
NoSuchKey: The specified key does not exist AWS S3 Codebuild
AWS::S3::NoSuchKey The specified key does not exist
In your first URL, it seems like you've got the hostname there twice: https://xxxxxx.s3.eu-west-2.amazonaws.com appears once, and then a second time URL-encoded. Are you storing the full hostname in the $url parameter to temporaryUrl? You should only be passing the key (the path to the object within the bucket) to that method.
The error doesn't sound like a permissions problem; it appears you have access to the bucket but just aren't getting the file path right.
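As a sketch of what that looks like in practice (the 'image' field name and 'images' folder are illustrative): keep only the object key that store() returns, then sign that key:
// store() returns the object key within the bucket, e.g. "images/abc123.jpg" -- no hostname
$path = $request->file('image')->store('images', 's3');
// later, sign the key itself rather than a full URL
$signedUrl = Storage::disk('s3')->temporaryUrl($path, now()->addMinutes(30));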

ValueError: The 'field_name' attribute has no file associated with it - only when updating?

I use DRF and django-storages to upload files to an S3 bucket.
I can create any new entry with a file, either from the admin or from a POST to the API endpoint.
But for some reason, as soon as I try to update the instance (be it from the admin or from the API endpoint), I get a ValueError that seems to indicate that DRF can't find the file.
But when I use Django's admin or shell, I see that the file has been correctly saved (FieldFile on instance) and that the file has been saved in the bucket.
I don't override anything: not the admin forms, not the serializer's update method, not the view's retrieve method, etc.
I have no idea how that could even be possible.
Any suggestion?
I ended up finding the solution. Somewhere in my code, one of the serializer's fields had a to_representation() method that was returning the file instance instead of instance.url. I hope this helps someone.
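For anyone hitting the same thing, a minimal sketch of the fix (the field class name here is hypothetical):
from rest_framework import serializers

class DocumentFileField(serializers.FileField):
    def to_representation(self, value):
        # value is a FieldFile; return its URL rather than the FieldFile itself,
        # and guard against an empty file before touching .url
        if not value:
            return None
        return value.url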

Can I serve files stored in Google Cloud Storage via a http.FileServer in golang?

I have developed a small web application that runs a web server in golang.
Each user can log in, view the list of their docs (previously uploaded), and click on an item to view an HTML page that shows some fields of the document plus an embedded tag whose src attribute points to the document.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an http handler:
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the URL
Then I create a file server that re-maps the URL to the real path of the folder where the files are stored on the filesystem of the server
fileServer := http.StripPrefix("/mydocs/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am trying to convert the code to also work on Google Cloud, I want to change it so that files are stored in a bucket (or better, in "sub-directories" of a bucket, even though these do not properly exist).
How can I modify the code so that the document URL maps to the object in the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue; I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS (a sketch of this follows below).
Use http.FS with an fs.FS backed by GCS.
It's possible that an fs.FS implementation for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
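For the proxying option, here is a minimal sketch using the cloud.google.com/go/storage client; the bucket and folder names are illustrative, the goji/pat router from the question is assumed, and the rights check from the original handler is assumed to happen before the object is read:
package main

import (
	"io"
	"net/http"

	"cloud.google.com/go/storage"
	"goji.io/pat"
)

// DocDownloadHandler streams a single object from the bucket to the client,
// replacing the http.FileServer/http.Dir pair used for local files.
// In real code the storage client would be created once at startup and reused.
func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	client, err := storage.NewClient(ctx)
	if err != nil {
		http.Error(w, "storage unavailable", http.StatusInternalServerError)
		return
	}
	defer client.Close()

	// ... check here that the user has the rights to view the document,
	// exactly as in the original handler ...

	docname := pat.Param(r, "docname")
	rc, err := client.Bucket("my-bucket").Object("user-specific-folder/" + docname).NewReader(ctx)
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer rc.Close()

	w.Header().Set("Content-Type", rc.Attrs.ContentType)
	io.Copy(w, rc) // stream the object body straight to the response
}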

Uploading images to Spring Boot and S3 all In-Memory

I have an Angular webapp that uses a Spring Boot REST service as its backing web service.
I am adding a "Profiles" feature for users, and as part of this I want to stand up an endpoint that allows users to upload profile images for themselves and immediately push those files to S3 (where I will host all the images).
Looking at several Spring Boot file upload tutorials:
http://www.mkyong.com/spring-boot/spring-boot-file-upload-example/
I update avatar image and display it but the avatar does not change in Spring Boot, why?
Many others
It seems that the standard way of handling such file uploads is to expose a controller endpoint that accepts MultipartFiles, like so:
@RestController
@RequestMapping("/v1/profiles")
public class ProfileController {

    @PostMapping("/photo")
    public ResponseEntity uploadProfilePhoto(@RequestParam("mpf") MultipartFile mpf) {
        // ...
    }
}
Looking at all this code, I can't tell if the MultipartFile instance is in-memory or if Spring sets its location somewhere (perhaps under /tmp?) on the disk.
Looking at the AWS S3 Java SDK tutorial, it seems the standard way to upload a disk-based File is like so:
File file = new File(uploadFileName);
s3client.putObject(new PutObjectRequest(bucketName, keyName, file));
So it looks like I must have a File on disk in order to upload to S3.
I'm wondering if there is a way to keep everything in memory, or whether this is a bad idea and I should stick to disks/File instances!
Is there a way to keep the entire profile image (MultipartFile) in-memory inside the controller method?
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
Or is this all a terrible idea (if so, why?!)?
Is there a way to keep the entire profile image (MultipartFile) in-memory inside the controller method?
No, there is no way to keep an image File in memory, because a File object in Java represents a path in the file system.
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
No; judging from S3's API documentation, there is no way for S3 to deserialize the image file for you during or after the upload.
Or is this all a terrible idea (if so, why?!)?
It depends on your specific case, but it is generally not preferred.
If there are not many users uploading images at the same time, your memory is probably enough to handle it.
Otherwise, you can easily run into out-of-memory problems.
If you insist on doing so, the S3 API can upload from an InputStream (if I remember correctly), and you can get an InputStream from your MultipartFile (see the sketch below).
This SO thread talks about uploading to S3 with InputStream
You can also take a look at File.createTempFile() to create a temp file.
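Putting the InputStream suggestion together, a rough sketch (the bucket name and key are illustrative; s3client is assumed to be an AmazonS3 client from the v1 SDK, whose PutObjectRequest has an overload taking an InputStream plus ObjectMetadata):
@PostMapping("/photo")
public ResponseEntity<Void> uploadProfilePhoto(@RequestParam("mpf") MultipartFile mpf) throws IOException {
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(mpf.getSize());
    metadata.setContentType(mpf.getContentType());

    s3client.putObject(new PutObjectRequest(
            "my-profile-images-bucket",                 // illustrative bucket name
            "profiles/" + mpf.getOriginalFilename(),    // illustrative key
            mpf.getInputStream(),                       // no temp File on disk
            metadata));

    return ResponseEntity.ok().build();
}
Setting the content length up front keeps the SDK from buffering the whole stream in memory just to work out its size.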
I have been looking at the same thing. Basically you want a user to be able to upload a photo album and have those photos served from S3, and probably have them secured so that only that user can upload/delete/etc.
I believe the simpler answer is for Spring Boot to get a pre-signed URL from S3 (a sketch follows below): https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html
This basically gives you a token defining the bucket, the object key ("/bobs_profile/smiling_bob.jpg"), and a time limit for that image to be uploaded.
Give that to your Angular app (or Ionic app) to upload the image to that location.
That should do it, but someone let me know if I'm wrong.
The only issue that I see is if Bob wants to upload "bobs_nude_photo.jpg" and only wants people logged in through Spring Security to be able to see it... well, I'm sure there is an S3 solution for that too.
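A rough sketch of generating such a pre-signed upload URL with the v1 SDK (bucket name and expiry are illustrative; s3client is again an AmazonS3 instance):
// somewhere in a Spring service
Date expiration = new Date(System.currentTimeMillis() + 1000 * 60 * 15); // 15 minutes

GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest("my-profile-images-bucket", "bobs_profile/smiling_bob.jpg")
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiration);

URL uploadUrl = s3client.generatePresignedUrl(request);
// hand uploadUrl to the Angular app, which PUTs the image bytes directly to S3
For the "only logged-in users can see it" part, the same pre-signed mechanism works for GET requests (HttpMethod.GET), so the objects themselves can stay private.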

Marking inserted objects as public with GoLang Google Cloud Storage API

I'm using the Golang storage/v1 package to upload files to Google Cloud Storage, using the following method:
func (r *ObjectsService) Insert(bucket string, object *Object) *ObjectsInsertCall
Insert: Stores a new object and metadata.
Everything works great, except I'm not sure how to publicly expose uploaded files. Using Google's Developers Console I can manually make a file public by clicking the "Public link" checkbox.
Any idea how I can achieve the same result using the above API? An example would be highly appreciated.
There's a PredefinedAcl method on ObjectsInsertCall. Predefined ACLs are described in the API documentation; one of them makes the object globally readable (the canned ACL known as "public-read", which the storage/v1 JSON API spells "publicRead").
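A minimal sketch of applying it during an insert (bucket, object and file names are illustrative):
package storageutil

import (
	"context"
	"os"

	storage "google.golang.org/api/storage/v1"
)

// uploadPublic inserts a local file into the bucket and marks it world-readable.
func uploadPublic(ctx context.Context, svc *storage.Service, bucket, name, localPath string) (*storage.Object, error) {
	f, err := os.Open(localPath)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	return svc.Objects.Insert(bucket, &storage.Object{Name: name}).
		Media(f).
		PredefinedAcl("publicRead"). // marks the object as globally readable
		Context(ctx).
		Do()
}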
