We are storing videos in object storage (AWS S3 / OCI Object Storage), and using the object URIs we can play the videos in an HTML video player. But if we make the bucket private, the possible approaches seem to be: use pre-authenticated URLs, or use the object storage SDK/API to get an input stream for the video object and stream the data as data buffers with ResourceRegion in WebFlux (we can handle all the authentication needed to access the private bucket data).
My question: is there a better way to serve the private bucket videos (content delivery & streaming)? Can we give the client a proxy URL instead of the video object URI directly? I can handle some authentication and authorisation on that URL and hide the actual video object URI, so that third-party apps cannot download the video.
Kindly provide suggestions on this.
Yes, there are ways. One is to have a proxy server route the external HTTP calls, but that offers only limited features. Another option is a custom-written microservice that streams data from a private/public bucket via an HTTP endpoint, with additional custom business logic.
You may refer to this sample Spring Boot microservice code to stream content from OCI Object Storage.
https://github.com/oracle-devrel/oci-sdk-java-samples/tree/main/usecases/storage-file-streaming
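For the AWS side, the same idea can be sketched as a plain Spring Web controller that proxies range requests to S3 with the AWS SDK for Java v2 (a WebFlux/ResourceRegion variant, as mentioned in the question, would work along the same lines). This is only a rough sketch, not the sample's code: the controller and bucket names are made up, and the caller authentication check is left as a placeholder.

```java
import org.springframework.core.io.InputStreamResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

@RestController
public class VideoStreamController {

    private final S3Client s3 = S3Client.create(); // credentials resolved server-side

    @GetMapping("/videos/{key}")
    public ResponseEntity<InputStreamResource> stream(
            @PathVariable String key,
            @RequestHeader(value = HttpHeaders.RANGE, required = false) String range) {

        // Your own authentication/authorisation check for the caller would go here.

        GetObjectRequest.Builder req = GetObjectRequest.builder()
                .bucket("my-private-videos")   // assumed bucket name
                .key(key);
        if (range != null) {
            req.range(range);                  // forward the browser's Range header to S3
        }

        ResponseInputStream<GetObjectResponse> object = s3.getObject(req.build());
        GetObjectResponse meta = object.response();

        HttpHeaders headers = new HttpHeaders();
        headers.set(HttpHeaders.ACCEPT_RANGES, "bytes");
        headers.setContentLength(meta.contentLength());
        if (meta.contentType() != null) {
            headers.setContentType(MediaType.parseMediaType(meta.contentType()));
        }
        if (meta.contentRange() != null) {
            headers.set(HttpHeaders.CONTENT_RANGE, meta.contentRange());
        }

        return new ResponseEntity<>(
                new InputStreamResource(object),
                headers,
                range != null ? HttpStatus.PARTIAL_CONTENT : HttpStatus.OK);
    }
}
```

Forwarding the browser's Range header to S3 and answering with 206 Partial Content is what lets the HTML video player seek without pulling the whole object through your proxy.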
You can generate a new access key and secret for your S3 storage and create a small, simple service/API in Node or any language of your choice. Every time your app needs a URL for a video, it can request a new URL from that service, and that URL can have an expiration time on it.
You can also ensure in your API that only your app is allowed to request a new URL.
However, if you mean that only your browser or your clients should be able to access the video, that may be difficult. With the above, you can control who can access the URL, how long the URL stays active, and who can call the API. Third parties would have to do a lot to bypass those restrictions.
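As a sketch of that URL-issuing service, the "new URL with an expiration time" step could look roughly like this with the AWS SDK for Java v2 presigner; the bucket name and the 10-minute expiry are placeholders, not anything from the original answer.

```java
import java.time.Duration;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

public class SignedUrlIssuer {

    /** Returns a time-limited URL for one object in the private bucket. */
    public static String issueUrl(String key) {
        try (S3Presigner presigner = S3Presigner.create()) {
            GetObjectRequest get = GetObjectRequest.builder()
                    .bucket("my-private-videos")               // assumed bucket name
                    .key(key)
                    .build();
            GetObjectPresignRequest presign = GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(10)) // URL expires after 10 minutes
                    .getObjectRequest(get)
                    .build();
            return presigner.presignGetObject(presign).url().toString();
        }
    }
}
```

Your own check of who is allowed to ask for a URL, and for which object, would run before this method is called.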
Related
In the first approach, the client uploads the file first, and when the upload succeeds it then calls another API to register the file.
In the second approach, the client calls the file registration API directly; that API sends the file to S3 storage and waits for the upload to finish before returning the response to the client.
My question: what is the best practice for this scenario?
Sorry for my English grammar; I'm trying to improve by writing more.
I am in the process of creating a REST API with image upload/retrieval capability.
Instead of sending the image data to the server for it to upload to storage, I am thinking of doing the following:
the client directly uploads the image to storage (Azure Blob Storage)
obtain the image URL from Blob Storage if the upload is successful
send the image metadata along with the Blob Storage URL to the server to be maintained
Is this an acceptable approach for managing image data (or videos, or any non-string data) through a REST API?
Also, what are some pros/cons of setting up the service this way?
There's nothing preventing you from doing it that way, but it introduces a bit of unnecessary complexity:
The client needs to be aware of different endpoints to handle this particular type of request.
If something changes in your Azure Blob Storage endpoint, you have to change the client code. And if you have users using an old cached version of the app, they may get odd errors.
Your client has to be carefully implemented to handle the process of first uploading the image to Azure and then sending the URL to the API. If the user refreshes, clicks the upload button again, or if there's a network issue, you will face complicated scenarios.
My recommendation is to encapsulate this complexity in the server, where you have better control of what's going on, by letting the client send a POST request with the multipart/form-data MIME type. The server can respond with details about the endpoint for the image on the server.
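To make that concrete, a minimal sketch of the server-side approach with Spring Web and the Azure Storage Blob SDK for Java might look like the following; the container name, connection-string property and ImageController class are assumptions for illustration, not part of the original answer.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

import java.io.IOException;
import java.util.UUID;

@RestController
public class ImageController {

    private final BlobContainerClient container;

    public ImageController(@Value("${azure.storage.connection-string}") String connectionString) {
        this.container = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient()
                .getBlobContainerClient("images");   // assumed container name
    }

    @PostMapping("/images")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file) throws IOException {
        String blobName = UUID.randomUUID() + "-" + file.getOriginalFilename();
        BlobClient blob = container.getBlobClient(blobName);
        blob.upload(file.getInputStream(), file.getSize());   // push the bytes to Blob Storage
        // Persist metadata (name, size, content type, blob URL) here, then return a reference.
        return ResponseEntity.ok(blob.getBlobUrl());
    }
}
```

The client only ever talks to /images; if the storage account, container or SDK changes, only this controller changes, not the client code.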
Is there any method to prevent a web client from filling my Parse cloud storage with dummy files? As far as I can see, creating a Parse.File (without an object reference) requires only a valid API key, which is plain text on the client side. Does this imply that anyone else can impersonate my valid JS client with that API key, or did I miss something in the API reference?
I am about to start working on a project, which is basically a web interface for a mobile banking application. The API is ready; I only need to provide the frontend part of the web application. I was going to build it with Backbone/Angular/Ember, but started to worry about security.
In particular: as a rule, every API request must contain a parameter method_code, which is calculated as a hash of the user token, the method name and a secret API key. If I put the logic for calculating this parameter into one of the .js files, anyone could potentially access some sensitive data using tools like Postman or even the browser console. How should I go about this issue? I could have a server-side script generate the method_code for me, but is it possible to make it accessible only to my web app's requests?
every API request must contain a parameter method_code, which is calculated as hash of user token, method name and secret API key
I could have a server-side script generating the method_code for me, but is it possible to make it accessible only to my web app's requests?
Yes, the server-side script would be the way to go if you do not want to expose the secret API key within your client side code or request data.
The user token can (presumably) come from the user's session cookie value? So simply have a server-side method that takes the method name and returns the method_code calculated from the secret API key (kept server side only) and the user token.
The Same Origin Policy will prevent another domain from making a request to your API and retrieving the method_code. I'm also assuming the API and front-end code run on the same domain here; if that's not the case, you can use CORS to allow your front-end code to read and retrieve data client-side via the API.
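A rough sketch of such a server-side method_code endpoint, assuming the code is a SHA-256 hash of token + method name + secret and that the user token lives in the server-side session (both assumptions, since the real API's scheme isn't shown; Spring Boot 3 / Java 17 assumed for the imports):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

import jakarta.servlet.http.HttpSession;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

@RestController
public class MethodCodeController {

    @Value("${banking.api.secret}")          // secret API key stays server-side only
    private String secretApiKey;

    @GetMapping("/method-code")
    public String methodCode(@RequestParam String methodName, HttpSession session) throws Exception {
        String userToken = (String) session.getAttribute("userToken"); // taken from the user's session
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(
                (userToken + methodName + secretApiKey).getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(hash);   // hex-encoded method_code for this method
    }
}
```

The secret API key never leaves the server; the browser only ever receives the resulting hash for the method it asked about.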
You can try generating a token based on security factors, encrypting it, and using it in your requests to identify your clients and validate requests.
I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send it to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that session cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
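For reference, generating a CloudFront signed URL (the signed-cookie variant described in the same documentation works along the same lines) looks roughly like this with the AWS SDK for Java v2. The distribution domain, key-pair ID and key path below are placeholders, and CloudFront must already be configured with a trusted key group for the signature to be accepted.

```java
import java.nio.file.Paths;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import software.amazon.awssdk.services.cloudfront.CloudFrontUtilities;
import software.amazon.awssdk.services.cloudfront.model.CannedSignerRequest;
import software.amazon.awssdk.services.cloudfront.url.SignedUrl;

public class CloudFrontSigner {

    /** Returns a short-lived CloudFront URL for one object behind the distribution. */
    public static String signedUrl(String objectPath) throws Exception {
        CannedSignerRequest request = CannedSignerRequest.builder()
                .resourceUrl("https://d1234example.cloudfront.net/" + objectPath) // placeholder domain
                .privateKey(Paths.get("/path/to/private_key.pem"))                // placeholder key file
                .keyPairId("K2JCJMDEHXQW5F")                                      // placeholder key-pair ID
                .expirationDate(Instant.now().plus(10, ChronoUnit.MINUTES))       // expires in 10 minutes
                .build();
        SignedUrl signed = CloudFrontUtilities.create().getSignedUrlWithCannedPolicy(request);
        return signed.url();
    }
}
```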