Marking inserted objects as public with the Golang Google Cloud Storage API

I'm using Golang package storage v1 to upload files to Google Cloud Storage,
using the following method:
func (r *ObjectsService) Insert(bucket string, object *Object) *ObjectsInsertCall
Insert: Stores a new object and metadata.
Everything works great, except that I'm not sure how to publicly expose the uploaded files. Using Google's Developers Console I can manually make a file public by ticking the Public link checkbox.
Any idea how I can achieve the same result using the above API? An example would be highly appreciated.

There's a PredefinedAcl method on ObjectsInsertCall. Predefined ACLs are described in the API documentation; one of them is "public-read" (spelled "publicRead" in the JSON API), which marks the object as globally readable.
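For illustration, here's a minimal sketch of an upload that applies that canned ACL, assuming a recent version of the google.golang.org/api/storage/v1 package with Application Default Credentials available; the bucket and file names are made up:

package main

import (
    "context"
    "log"
    "os"

    storage "google.golang.org/api/storage/v1"
)

func main() {
    ctx := context.Background()
    // Assumes Application Default Credentials are configured.
    svc, err := storage.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }

    f, err := os.Open("photo.jpg") // hypothetical local file
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    obj := &storage.Object{Name: "photo.jpg"}
    _, err = svc.Objects.Insert("my-bucket", obj). // hypothetical bucket
        PredefinedAcl("publicRead").               // JSON API spelling of public-read
        Media(f).
        Do()
    if err != nil {
        log.Fatal(err)
    }
    // The object should now be readable at
    // https://storage.googleapis.com/my-bucket/photo.jpg
}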

Related

Can I serve files stored in Google Cloud Storage via a http.FileServer in golang?

I have developed a small web application that runs a web server in Golang.
Each user can log in, view the list of their docs (previously uploaded), and click on an item to view an HTML page that shows some fields of the document plus an <img> tag with a src attribute.
The src attribute contains a URL like "mydocuments/download/123-456-789.pdf".
On the server side I handle the URL ("mydocuments/download/*") via an http Handler
mymux.HandleFunc(pat.Get("/mydocuments/download/:docname"), DocDownloadHandler)
where:
I check that the user has the rights to view the document in the url
Then I create a fileserver that obviously re-maps the url to the real path of the folder where the files are stored on the filesystem of the server
fileServer := http.StripPrefix("/mydocuments/download/", http.FileServer(http.Dir("/the-real-path-to-documents-folder/user-specific-folder/")))
and of course I serve the files
fileServer.ServeHTTP(w, r)
IMPORTANT: the directory where the documents are stored is not the static-files directory I use for the website, but a directory where all files end up after being uploaded by users.
My QUESTION
As I am trying to convert the code to also work on Google Cloud, I am changing it so that files are stored in a bucket (or better, in "sub-directories" of a bucket, even though those do not properly exist).
How can I modify the code to map the real document URL to its location in the Cloud Storage bucket?
Can I still use the http.FileServer technique above (and if so, what should I use instead of http.Dir to map the bucket "sub-folder" path where the documents are stored)?
I hope I was clear enough in explaining my issue; I apologise in advance for any unclear points.
Some options are:
Give the user direct access to the resource using a signed URL.
Write code to proxy the request to GCS (see the sketch after this list).
Use http.FS with an fs.FS backed by GCS.
It's possible that a fs.FS for GCS already exists, but you may need to write one.
You can use http.FileSystem since it is an interface and can be implemented however you like.
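As one concrete route, here's a sketch of the proxy option, assuming a recent cloud.google.com/go/storage client; the bucket name and per-user object prefix are made up:

package main

import (
    "io"
    "net/http"
    "strings"

    "cloud.google.com/go/storage"
)

// Assumes a package-level client created once at startup with
// storage.NewClient(ctx).
var client *storage.Client

// DocDownloadHandler runs after your existing permission check and
// streams the object from GCS straight to the user.
func DocDownloadHandler(w http.ResponseWriter, r *http.Request) {
    name := strings.TrimPrefix(r.URL.Path, "/mydocuments/download/")
    // "user-specific-folder/" stands in for your per-user object prefix.
    rc, err := client.Bucket("my-bucket").Object("user-specific-folder/" + name).NewReader(r.Context())
    if err != nil {
        http.Error(w, "document not found", http.StatusNotFound)
        return
    }
    defer rc.Close()
    w.Header().Set("Content-Type", rc.Attrs.ContentType)
    io.Copy(w, rc)
}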

Uploading images to Spring Boot and S3 all In-Memory

I have an Angular webapp that uses a Spring Boot REST service as its backing web service.
I am adding a "Profiles" feature for users, and as part of this I want to stand up an endpoint that allows users to upload profile images for themselves and immediately push those files to S3, where I will host all the images.
Looking at several Spring Boot/file upload tutorials :
http://www.mkyong.com/spring-boot/spring-boot-file-upload-example/
I update avatar image and display it but the avatar does not change in Spring Boot , why?
Many others
It seems that the standard way of handling such file uploads is to expose a controller endpoint that accepts MultipartFiles, like so:
@RestController
@RequestMapping("/v1/profiles")
public class ProfileController {

    @PostMapping("/photo")
    public ResponseEntity uploadProfilePhoto(@RequestParam("mpf") MultipartFile mpf) {
        // ...
    }
}
Looking at all this code, I can't tell whether the MultipartFile instance is held in memory or whether Spring writes it somewhere on disk (perhaps under /tmp?).
Looking at the AWS S3 Java SDK tutorial, it seems the standard way to upload a disk-based File is like so:
File file = new File(uploadFileName);
s3client.putObject(new PutObjectRequest(bucketName, keyName, file));
So it looks like I must have a File on disk in order to upload to S3.
I'm wondering if there is a way to keep everything in memory, or whether this is a bad idea and I should stick to disks/File instances!
Is there a way to keep the entire profile image (MultipartFile) in memory inside the controller method?
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
Or is this all a terrible idea (if so, why?!)?
Is there a way to keep the entire profile image (MultipartFile) in memory inside the controller method?
No, there is no way to keep an image File in memory, because a File object in Java represents a path in the file system, not the file's contents.
Is there a way to feed (maybe via serialization?!) a MultipartFile instance to S3's PutObjectRequest?
No; judging from S3's API documentation, there is no way to hand S3 a serialized MultipartFile and have it reconstruct the image file for you during or after the upload.
Or is this all a terrible idea (if so, why?!)?
It depends on your specific case, but it is generally not preferred.
If there are not many users uploading images at the same time, your memory is probably enough to handle it.
Otherwise, you can easily run into out-of-memory problems.
If you insist on doing so, the S3 API can upload from an InputStream (if I remember correctly), and you can get an InputStream from your MultipartFile; a sketch follows below.
This SO thread talks about uploading to S3 with InputStream
You can also take a look at File.createTempFile() to create a temp file.
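A minimal sketch of that InputStream route, assuming the AWS SDK for Java v1 and an already-configured AmazonS3 client; the bucket and key names here are made up:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import org.springframework.web.multipart.MultipartFile;

public class ProfilePhotoService {

    private final AmazonS3 s3client;

    public ProfilePhotoService(AmazonS3 s3client) {
        this.s3client = s3client;
    }

    public void upload(MultipartFile mpf) throws java.io.IOException {
        ObjectMetadata metadata = new ObjectMetadata();
        // Supplying the length up front lets the SDK stream the body
        // instead of buffering it to compute the size itself.
        metadata.setContentLength(mpf.getSize());
        metadata.setContentType(mpf.getContentType());
        s3client.putObject(new PutObjectRequest(
                "my-profile-bucket",                     // hypothetical bucket
                "profiles/" + mpf.getOriginalFilename(), // hypothetical key
                mpf.getInputStream(),                    // no File on disk needed
                metadata));
    }
}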
I have been looking at the same thing. Basically you want a user to be able to upload a photo album and have those photos served from S3, and probably have them secured so only that user can upload/delete/etc.
I believe the simpler answer is to have Spring Boot obtain a pre-signed URL from S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html
This basically gives you a token defining the bucket, the object key ("/bobs_profile/smiling_bob.jpg"), and a time limit within which that image must be uploaded.
Give that to your Angular app (or Ionic app) to upload the image to that location.
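For reference, a hedged sketch of generating that pre-signed upload URL with the AWS SDK for Java v1; the bucket name and fifteen-minute expiry are assumptions:

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class PresignedUrlService {

    private final AmazonS3 s3client;

    public PresignedUrlService(AmazonS3 s3client) {
        this.s3client = s3client;
    }

    public URL presignUpload(String key) {
        // The URL is valid for 15 minutes, then the token expires.
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-profile-bucket", key) // hypothetical bucket
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);
        return s3client.generatePresignedUrl(request);
    }
}

Calling presignUpload("bobs_profile/smiling_bob.jpg") then returns the URL to hand to the front end.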
That should do it, but someone let me know if I'm wrong.
The only issue that I see is if Bob wants to upload "bobs_nude_photo.jpg" and only wants people logged in through Spring Security to be able to see it... well, I'm sure there is an S3 solution for that??

laravel filesystem using S3 - custom metadata

Amazon S3 allows you to attach custom metadata to objects; however, I have been unable to figure out how to access this data using the Laravel filesystem. Searching turns up little information about this. Does anyone know how to access that data?
EDIT: I found a method in Storage to display the metadata, however it does not seem to include the custom meta key and string I added to the image via the S3 control panel.
return Storage::getMetaData($path);
results:
{
    "path": "toolkit\/social-media\/facebook\/cover-image\/SG-Chivalry-Facebook-Cover-Co-Branded.jpg",
    "dirname": "toolkit\/social-media\/facebook\/cover-image",
    "basename": "SG-Chivalry-Facebook-Cover-Co-Branded.jpg",
    "extension": "jpg",
    "filename": "SG-Chivalry-Facebook-Cover-Co-Branded",
    "timestamp": 1460581502,
    "size": "113476",
    "mimetype": "image\/jpeg",
    "type": "file"
}
Okay, I found a solution using PHP's get_headers function, which needs the full public URL of the object:
return get_headers(Storage::url("file.jpg"));
This returns all the response headers, including the custom x-amz-meta-* metadata fields from S3.
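If you need the custom fields without relying on a publicly readable URL, one approach (a sketch, assuming Laravel 5.x with the Flysystem v1 S3 adapter) is to drop down to the underlying AWS SDK client, whose headObject result exposes the x-amz-meta-* pairs under the Metadata key:

use Illuminate\Support\Facades\Storage;

$disk = Storage::disk('s3');
// getAdapter()/getClient() reach through Flysystem to the AWS SDK S3 client.
$client = $disk->getDriver()->getAdapter()->getClient();
$result = $client->headObject([
    'Bucket' => config('filesystems.disks.s3.bucket'),
    'Key'    => $path, // e.g. the same $path you passed to getMetaData()
]);
return $result['Metadata']; // only the custom metadata fields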

Upload public object to Google Cloud Storage with public link

I've searched everywhere but I cannot find a solution: how can I upload a public object to my Google Cloud Storage bucket? I want it so that once the image is uploaded, it can be viewed by anyone in the world.
It seems I can only get this done by manually clicking the Public link checkbox in Google Storage, but I want to make these objects public automatically through Google's API.
The web interface doesn't provide a way to make the objects being uploaded public automatically, but you can do one of two things:
If you want to just make objects publicly readable during one particular session you could use gsutil to do it, e.g.,
gsutil -m cp -a public-read dir/* gs://your-bucket
If you want to make objects publicly readable across all future sessions you could set a default object ACL on the bucket, using a command like:
gsutil defacl set public-read gs://your-bucket
If you do that, uploads via the web interface (as well as by any other API requests, e.g., gsutil cp commands) will be made publicly readable automatically.
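If you'd rather set that default object ACL programmatically instead of via gsutil, a sketch with the cloud.google.com/go/storage client (the bucket name is an assumption) would be:

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Equivalent of `gsutil defacl set public-read gs://your-bucket`:
    // objects created from now on are readable by allUsers.
    acl := client.Bucket("your-bucket").DefaultObjectACL()
    if err := acl.Set(ctx, storage.AllUsers, storage.RoleReader); err != nil {
        log.Fatal(err)
    }
}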

Google static map API getting 403 forbidden when loading from img tag

I have a Google map that shows the location of a property, but dynamic maps don't print well, so I decided to implement the Google Static Maps image API.
http://lpoc.co.uk/properties-for-sale/property/oldgate-dairy-st-james-road-long-sutton-cambridgeshire-pe12/?prop-print=1
The link above is an example of a property in print view; it should show a static map image, but it fails to load, and in my inspector I'm getting a 403 Forbidden response for the image.
But if I go to the URL directly, the image loads...
What am I doing wrong?
Thanks
Scott
This has gotten quite a lot of views, so I'm adding my solution to the problem here:
When using the new API, make sure you generate a key for browser apps (with referrers) and also make sure the patterns match your URL.
E.g. when requesting from example.com your pattern should be
example.com/*
When you're requesting from www.example.com:
*.example.com/*
So make sure you check whether a subdomain is present and allow both patterns in the developer console.
Visit the Developer Console.
Under API Keys, click the pencil icon to edit.
Under "Key restrictions", ensure that you have an entry for example.com/*, *.example.com/*, and any local testing domains you might want.
There seems to be some confusion here, and since this thread is highly ranked on Google, it seems relevant to clarify.
Google has a couple of different API's to use for their maps service:
Javascript API
The old version of this API was version 2, which required a key. This version is deprecated, and it is recommended to upgrade to the newer version 3. Note that the documentation still states that you need a key for this to function, except if you're using "Google Maps API for Business".
Static Maps API
This is a whole different story. Static Maps is a service that does not require any JavaScript. You simply call a URL, and Google will return a map image, making it possible to insert the URL directly into your <img> tag.
The newest version is version 2, and this requires a key to function because a usage limit is applied.
A key can be requested here:
https://code.google.com/apis/console
And the key should be added to the request for the correct image to be generated:
http://maps.googleapis.com/maps/api/staticmap?center=New+York,NY&zoom=13&size=600x300&key=API_console_key
I hope this clears up some confusion.
I had this same problem but my solution was different. I had the V2 Maps API enabled, but not the Static Maps API (I thought this was V2). I enabled the Static Maps API and it worked.
Oops I feel like such an idiot. I was using the old V2 maps API URL and not the new V3 API URL. I was getting a 403 because I was using the V2 URL without providing an API key :(
Be a hundred percent sure of these points (for static maps):
Enable your project at this URL:
https://console.developers.google.com/apis/api/static_maps_backend/overview?project=
Make sure your localhost, staging, and production URLs are all enabled, with wildcards, in the referrer section.
Google has changed its policy and you now need an API key to display maps. Refer to this for more: Google Maps API without key?
Hope it helps.
Staticmaps V3 doesn't need the "Key" attribute and removing it seems to solve the <img> source problem.
Try with a URL like this:
http://maps.googleapis.com/maps/api/staticmap?center=0.0000,0.0000&zoom=13&size=200x200&maptype=roadmap&markers=0.0000,0.0000&sensor=false
For more information read this.
Yeah, Google Maps API version 3 is the JavaScript version; the latest "Google Static Maps" API is 2.0. I suspect there might be some restriction on use.
I also could not display static maps and saw a 403 error in the browser's network console.
The HTTP response headers were:
status:403
x-content-type-options:nosniff
I had an API key with a lot of Google Maps APIs enabled but the Google Static Maps API was missing, enabling it solved the issue.
Now you should use the 'signature' parameter, which you should add to the request; otherwise static maps won't work.
Here are a few useful links:
1) how to generate a signature
2) how to make the signature on the BE side (code snippet)
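For what it's worth, the documented signing scheme is an HMAC-SHA1 over the URL's path and query string, keyed with your URL-signing secret from the console; a sketch in Go, where the request URL and secret are placeholders:

package main

import (
    "crypto/hmac"
    "crypto/sha1"
    "encoding/base64"
    "fmt"
    "log"
    "net/url"
)

// signMapURL appends a Google Maps URL signature: HMAC-SHA1 of
// "path?query" keyed with the URL-safe base64 signing secret,
// encoded back to URL-safe base64.
func signMapURL(rawURL, secret string) (string, error) {
    u, err := url.Parse(rawURL)
    if err != nil {
        return "", err
    }
    key, err := base64.URLEncoding.DecodeString(secret)
    if err != nil {
        return "", err
    }
    mac := hmac.New(sha1.New, key)
    mac.Write([]byte(u.Path + "?" + u.RawQuery))
    sig := base64.URLEncoding.EncodeToString(mac.Sum(nil))
    return rawURL + "&signature=" + sig, nil
}

func main() {
    signed, err := signMapURL(
        "https://maps.googleapis.com/maps/api/staticmap?center=0,0&zoom=13&size=200x200&key=API_console_key",
        "dGhpcy1pcy1ub3QtYS1yZWFsLXNlY3JldA==") // hypothetical secret (URL-safe base64)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(signed)
}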
I am using WordPress 4.9.4 with the ChurchThemes Exodus theme. I had applied for and generated a new API key, and confirmed it was being used when calling the map. However, the JS console showed a Google Maps error.
As Johnny White mentioned above, I had to navigate to the API Library screen via the APIs & Services menu.
You will be greeted by the API Library screen. Click on Maps (17), lower LHS.
Search for and click Google Static Maps API, and enable it if needed.
You may also need to enable the Google Maps JavaScript API (same process as for Static Maps).
Once that is done your maps should start appearing on your site or app.
If they don't appear on refresh you may need to:
clear your cache (WordPress or Drupal websites),
wait the 5 min recommended for the API to register the enabled APIs.
Try enabling billing on this Google Cloud Project/Firebase Project.
I was experiencing this same issue and just received the 403 error in the console.
Copying and pasting the Static Maps URL into the address bar and loading it showed the following error message:
The Google Maps Platform server rejected your request. You must enable Billing on the Google Cloud Project at
https://console.cloud.google.com/project/_/billing/enable Learn more at https://developers.google.com/maps/gmp-get-started
Hope this helps!
