How to mitigate the risks of sharing an Amazon S3 pre-signed URL? - spring-boot

I am developing a Spring Boot REST API application. The use cases are as follows:
All REST API endpoints are secured using HTTPS.
All responses in this application are cached for 30 minutes, keyed on the request.
One of my REST API responses contains an AWS S3 pre-signed URL.
This REST API will be consumed by a mobile app.
Issues
How do I protect the AWS S3 pre-signed URL from attackers who can snoop on the mobile app's requests and responses?
Since I enforce caching in my Spring Boot application, I cannot expire the pre-signed URL before the cache times out; the cached response containing the pre-signed URL will be served until then. We are planning to make the cache timeout (say 5 minutes) shorter than the pre-signed URL timeout (say 7 minutes).
Questions
How do I protect an AWS S3 pre-signed URL from attackers?
How do I handle this cache logic intelligently? I don't want to regenerate the pre-signed URL often, especially after evicting the cache on a list operation.
Is there a solution for maintaining the pre-signed URLs outside this Spring Boot application? That is, could another microservice handle the pre-signed URLs so that the mobile app uses them directly?
Any response is appreciated.

Amazon recommends using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS). Here is a link to an Amazon blog series on how to accomplish this with the AWS SDK for Java. This link points to part 2 of 3 in the series.
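For the cache question specifically, the invariant to enforce is that the cache TTL plus a safety buffer stays below the pre-signed URL's lifetime, so that a URL served from the oldest cached copy is never already expired. A minimal sketch of that check in plain Java, using the 7-minute/5-minute numbers from the question (the 1-minute buffer and all names are assumptions for illustration):

```java
import java.time.Duration;

public class PresignCachePolicy {
    // Pre-signed URL lifetime and response-cache TTL, as given in the question.
    static final Duration URL_TTL = Duration.ofMinutes(7);
    static final Duration CACHE_TTL = Duration.ofMinutes(5);
    // Safety buffer so a client reading the last cached copy still has time to use the URL.
    static final Duration BUFFER = Duration.ofMinutes(1);

    /** A cached response is safe to serve only if the URL outlives the cache entry plus buffer. */
    static boolean cacheIsSafe(Duration urlTtl, Duration cacheTtl, Duration buffer) {
        return urlTtl.compareTo(cacheTtl.plus(buffer)) >= 0;
    }

    /** Worst-case time a client has left on the URL when it reads the oldest cached copy. */
    static Duration worstCaseRemaining(Duration urlTtl, Duration cacheTtl) {
        return urlTtl.minus(cacheTtl);
    }
}
```

With these numbers, a client that reads the response just before cache eviction still has about 2 minutes to use the URL; regeneration then happens at most once per cache window.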

Related

Video streaming with object storage bucket

We are storing videos in object storage (AWS S3 / OCI Object Storage), and using the object URIs we can play the videos in an HTML video player. But if we make the bucket private, the possible approaches are to use pre-authenticated URLs, or to use the object storage SDK API to get an input stream for the video object and stream the data using data buffers with ResourceRegion in WebFlux (where we can handle all the authentication needed to access the private bucket's data).
My query: is there a better way to serve the private bucket's videos (content delivery and streaming)? Can we give the client a proxy URL instead of the video object URI directly? I could then handle authentication and authorization on that URL and hide the actual video object URI, so that third-party apps cannot download the video.
Kindly provide suggestions on this.
Yes, there are ways. One is to have a proxy server route external HTTP calls, but that offers only limited features. Another option is a custom-written microservice that streams data from a private/public bucket via an HTTP endpoint, with additional custom business logic.
You may refer to this sample Spring Boot microservice code to stream content from OCI Object Storage.
https://github.com/oracle-devrel/oci-sdk-java-samples/tree/main/usecases/storage-file-streaming
You can generate a new access key and secret for your S3 storage and create a small, simple service/API with Node or any language of your choice. Every time your app needs a URL for a video, it can request a new URL from the service, and that URL can carry an expiration time.
Also, in your API you can ensure that only your app can request a new URL.
However, if you mean that you want only your browser or your clients' browsers to be able to access the video, that may be difficult. With the above, you can control who can access the URL, how long the URL stays active, and who can call the API. Third parties would have to do a lot to bypass your restrictions.
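The expiring-URL idea in this answer boils down to a link the service signs when issuing and verifies when serving. A minimal sketch of that concept in plain Java (illustrative only, not the actual S3 signature format; the class, paths, and parameter names are made up):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SignedLinks {
    private final byte[] secret;

    SignedLinks(String secret) {
        this.secret = secret.getBytes(StandardCharsets.UTF_8);
    }

    /** Returns path?expires=...&sig=..., where the signature covers both path and expiry. */
    String sign(String path, long expiresEpochSeconds) {
        return path + "?expires=" + expiresEpochSeconds
             + "&sig=" + hmacHex(path + "|" + expiresEpochSeconds);
    }

    /** Valid only if the signature matches (constant-time compare) and the expiry is in the future. */
    boolean verify(String path, long expiresEpochSeconds, String sig, long nowEpochSeconds) {
        String expected = hmacHex(path + "|" + expiresEpochSeconds);
        return MessageDigest.isEqual(
                   expected.getBytes(StandardCharsets.UTF_8),
                   sig.getBytes(StandardCharsets.UTF_8))
            && nowEpochSeconds < expiresEpochSeconds;
    }

    private String hmacHex(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            StringBuilder hex = new StringBuilder();
            for (byte b : mac.doFinal(payload.getBytes(StandardCharsets.UTF_8))) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the expiry is inside the signed payload, a client cannot extend the lifetime by editing the query string; tampering with either the path or the timestamp invalidates the signature.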

What is the best practice for a client using 2 different APIs: one to upload files and another to register them?

In the first approach, the client uploads the file and waits for the response; if it is successful, the client registers the file with another API.
In the second approach, the client calls the register-files API; that API sends the file to S3 storage and returns the response to the client once the upload to S3 completes.
My question: what is the best practice for this scenario?
I am sorry for the English grammar; I am trying to learn by typing more and more.

Pattern for REST API with Image

I am in the process of creating a REST API with image upload/retrieval capability.
Instead of sending image data to the server for it to upload to storage,
I am thinking of doing the following:
the client directly uploads the image to storage (Azure Blob Storage)
obtain the image URL from Blob Storage if the upload is successful
send the image metadata, along with the image's Blob Storage URL, to the server to be maintained
Is this an acceptable approach for managing image data (or videos, or any non-string data) through a REST API?
Also, what are some pros/cons of setting up the service this way?
There's nothing preventing you from doing it that way, but it introduces a bit of unnecessary complexity:
The client needs to be aware of different endpoints to handle this particular type of request.
If something changes in your Azure Blob Storage endpoint, you have to change the client code. And if you have users using an old cached version of the app, they may get odd errors.
Your client has to be carefully implemented to handle the process of first uploading the image to Azure and then sending the URL to the API. If the user refreshes, clicks the upload button again, or if there's a network issue, you will face complicated scenarios.
My recommendation is to encapsulate this complexity in the server, where you have better control of what's going on, by letting the client send a POST request with the multipart/form-data MIME type. The server can respond with details about the image's endpoint on the server.
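To make that flow concrete: with multipart/form-data, the image bytes and their metadata travel together in one POST, so the server can upload to Blob Storage and record the metadata in a single step. A rough sketch of what such a request body looks like on the wire (the boundary, field names, and payloads are arbitrary examples):

```java
public class MultipartBody {
    /** Builds a minimal multipart/form-data body with one file part and one metadata part. */
    static String build(String boundary, String fieldName, String fileName,
                        String contentType, String fileData, String metaJson) {
        return "--" + boundary + "\r\n"
             + "Content-Disposition: form-data; name=\"" + fieldName
                 + "\"; filename=\"" + fileName + "\"\r\n"
             + "Content-Type: " + contentType + "\r\n\r\n"
             + fileData + "\r\n"
             + "--" + boundary + "\r\n"
             + "Content-Disposition: form-data; name=\"metadata\"\r\n\r\n"
             + metaJson + "\r\n"
             + "--" + boundary + "--\r\n";
    }
}
```

In practice the client's HTTP library and the server framework (e.g. Spring's multipart support) assemble and parse this format for you; the sketch only shows why one request can carry both the file and its metadata.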

upload files directly to amazon s3 using fineuploader

I am trying to upload files directly to S3, but as per my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In this request, your server will respond with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request and requires you to use your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/.
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
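For the first option, the signature the server returns is, at its core, an HMAC over the policy document Fine Uploader posts to it. A rough sketch of the V2-style S3 POST policy signature such an endpoint computes, assuming the policy arrives already base64-encoded (names are illustrative, and a real endpoint should also validate the policy's contents before signing):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PolicySigner {
    /** V2-style S3 POST signature: Base64(HMAC-SHA1(awsSecretKey, base64Policy)). */
    static String sign(String base64Policy, String awsSecretKey) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(awsSecretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            return Base64.getEncoder()
                         .encodeToString(mac.doFinal(base64Policy.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The point of routing this through your server is that only the server ever touches the AWS secret key; the browser sees nothing but the finished signature.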
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post, the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/

Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send it to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
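For context, a CloudFront signed URL (or the corresponding CloudFront-Policy / CloudFront-Signature / CloudFront-Key-Pair-Id cookies) carries a small policy document that is RSA-signed with your CloudFront key pair. A sketch of the two plain-string pieces involved, with the RSA signing step itself omitted (the resource URL and timestamp are example values):

```java
import java.util.Base64;

public class CloudFrontPolicy {
    /** Canned-policy JSON used by CloudFront signed URLs/cookies: one resource, one expiry. */
    static String cannedPolicy(String resourceUrl, long expiresEpochSeconds) {
        return "{\"Statement\":[{\"Resource\":\"" + resourceUrl + "\","
             + "\"Condition\":{\"DateLessThan\":{\"AWS:EpochTime\":" + expiresEpochSeconds + "}}}]}";
    }

    /** CloudFront's URL-safe base64 variant: '+' becomes '-', '=' becomes '_', '/' becomes '~'. */
    static String cloudFrontBase64(byte[] data) {
        return Base64.getEncoder().encodeToString(data)
                     .replace('+', '-').replace('=', '_').replace('/', '~');
    }
}
```

In the signed-cookie variant, the policy and its RSA signature are set as cookies scoped to your domain, which is what makes the "session cookie that Amazon expects" behavior from the question possible without proxying the files yourself.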
