Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query string authentication) that expires and send that to their browser. However, they can then share this URL with others before the expiration.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that session cookie before serving files?

That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
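To illustrate the idea, here is a minimal Python sketch of how a CloudFront signed URL is assembled from a canned policy. The distribution domain and key-pair ID are placeholders, and the real signature must be an RSA-SHA1 signature of the policy made with your CloudFront private key (e.g. via boto3's `CloudFrontSigner`); the stub below only shows the structure of the policy and the query parameters CloudFront checks.

```python
import json
import time
from urllib.parse import urlencode

def canned_policy(resource_url: str, expires_epoch: int) -> str:
    """Build the canned policy JSON that CloudFront expects for signed URLs."""
    return json.dumps({
        "Statement": [{
            "Resource": resource_url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}}
        }]
    }, separators=(",", ":"))  # compact form, no whitespace

def build_signed_url(resource_url: str, signature: str, key_pair_id: str,
                     expires_epoch: int) -> str:
    """Append the query parameters CloudFront validates before serving the object."""
    params = urlencode({
        "Expires": expires_epoch,
        "Signature": signature,      # base64 RSA-SHA1 signature of the policy
        "Key-Pair-Id": key_pair_id,  # ID of the CloudFront key pair used to sign
    })
    return f"{resource_url}?{params}"

resource = "https://d111111abcdef8.cloudfront.net/private/image.jpg"
expires = int(time.time()) + 600  # link valid for 10 minutes
policy = canned_policy(resource, expires)
url = build_signed_url(resource, "SIGNATURE-PLACEHOLDER", "APKAEXAMPLE", expires)
```

The signed-cookie variant works the same way, except the policy, signature, and key-pair ID travel in `CloudFront-*` cookies instead of query parameters, which is what lets the URL itself stay clean and unshareable.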

Related

Hosting uploads on amazon s3 in a private bucket, accessing url's from within Laravel

I'm using an S3 bucket for my application's user uploads. This bucket is private.
When I use the following code, the generated URL is not accessible from within the application:
return Storage::disk('s3')->url($this->path);
I can solve this by generating a temporary URL, which is accessible:
return Storage::disk('s3')->temporaryUrl($this->path, Carbon::now()->addMinutes(10));
Is this the only way to do this? Or are there other alternatives?
When objects are private in Amazon S3, they cannot be accessed by an "anonymous" URL. This is what makes them private.
An object can be accessed via an AWS API call from within your application if the IAM credentials associated with the application have permission to access the object.
If you wish to make the object accessible via a URL in a web browser (eg as the page URL, or when referencing it within a tag such as <img>), then you will need to create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The URL includes authorization information.
While I don't know Laravel, it would appear that your first code sample just provides a normal "anonymous" URL to the object in Amazon S3 and is therefore (correctly) failing. Your second code sample is apparently generating a pre-signed URL, which will work for the given time period. This is the correct way to make a URL that you can use in the browser.

laravel login does not work with cloudFront AWS and Certificate Manager

I have an application built on Laravel. I needed to enable HTTPS on my system, and I used CloudFront and Certificate Manager.
I was able to configure everything, except that the Laravel authentication system stopped working. Apparently the session in Laravel does not work with CloudFront (CDN).
The system shows no errors. It simply does not authenticate the user.
I suspect the reason is CloudFront, because CloudFront sits between the browser and the EC2 server. Does anyone know if there is a Laravel authentication problem with CloudFront and Certificate Manager?
my system: https://loja2.softshop.com.br/login
credentials:
login: teste#sandbox.pagseguro.com.br
password: tim140
The Laravel validation also does not show the error messages.
For web distributions, you can choose whether you want CloudFront to forward cookies to your origin and to cache separate versions of your objects based on cookie values in viewer requests.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html
By default, no cookies are forwarded by CloudFront. Since most web sites providing any kind of dynamic content use cookies for managing state and authentication, the default configuration usually needs to be modified for dynamic sites.
Note the caveats on the same page of the documentation -- you generally only want to forward cookies to your origin on requests where the origin actually needs them, so you may want to create separate cache behaviors without cookies enabled for static resources, in order to maintain a reasonable cache hit ratio for those resources.
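As a sketch, the relevant fragment of a (legacy-style) cache behavior configuration that whitelists only the cookies Laravel needs might look like the following. The cookie names `laravel_session` and `XSRF-TOKEN` are Laravel's defaults and are assumptions here; check your app's `config/session.php`:

```json
{
  "ForwardedValues": {
    "QueryString": true,
    "Cookies": {
      "Forward": "whitelist",
      "WhitelistedNames": {
        "Quantity": 2,
        "Items": ["laravel_session", "XSRF-TOKEN"]
      }
    }
  }
}
```

Whitelisting rather than forwarding all cookies keeps static-asset behaviors cacheable while still letting the session cookie reach the origin on dynamic requests.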

How should I load images if I use token-based authentication

I have a client-side application on domain client-domain.example and a server-side application on domain server-domain.example. There is an API on the server-side. The client-side application sends AJAX requests to the server-side application. I use token-based authentication, so the client-side application sends token in headers with each AJAX request, for example: "Authorization: Bearer {some token}". It works fine with AJAX requests, when I need to get or post some data.
But the server-side API also serves files, for example images. The files are private; only authenticated users can get them. And I need to show these images on the client side in an <img> tag. I can't fetch them using <img src="http://server-domain.example/path/to/image">, because in this case the browser will not send the Authorization header to the server.
What is the adopted solution? How client applications load images from server-side API?
There are three methods to solve it; the best approach is to use signed URLs.
1. Signed URL (can be insecure)
The first method simply creates a route without authentication (anonymous access), with a signature hash parameter that indicates whether the resource can be loaded or not.
<img src="http://server-domain.example/path/to/image?guid=f6fc84c9f21c24907d6bee6eec38cabab5fa9a7be8c4a7827fe9e56f2">
When the server receives the request, it must validate that the guid's expiration time has not been reached and, of course, check that the guid has a valid signature.
This approach is used by several file/document servers such as Dropbox, S3, CDN providers, etc.
See how some of these companies document the technique:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html#private-content-overview-choosing-duration
https://client.cdn77.example/support/knowledgebase/cdn-resource/how-do-i-set-up-signed-urls
SECURITY:
the guid cannot be just the UUID of the image/user, because this provides no protection.
the guid cannot be the same token you use for authentication (for example, you can't use auth JWTs), because if the user shares the link, they share their tokens too (see also (2)).
as mentioned above: the guid should have a server-side mechanism of validation (date/signature/...) and should not grant more permissions than "access to the requested file".
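A minimal Python sketch of option 1, assuming an HMAC-based scheme (function names and parameters are illustrative, not from any particular framework): the server appends an expiry timestamp and an HMAC signature of the path plus expiry, then validates both on each request.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"server-side-secret"  # never shipped to the client

def sign_path(path: str, ttl_seconds: int = 600) -> str:
    """Return a time-limited, tamper-evident URL for a private file."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify(path: str, expires: int, sig: str) -> bool:
    """Reject expired links and links whose signature does not match."""
    if int(time.time()) > expires:
        return False
    expected = hmac.new(SECRET_KEY, f"{path}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = sign_path("/path/to/image")
```

Note that the signature grants access only to that one path, for a limited time, which is exactly the property the security notes above ask for: it carries no other permissions even if the link is shared.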
2. Query string with JWT (most probably a security breach)
The second method is to pass the token in the query string with the image URL.
This method is not recommended because it clearly exposes the URL, and many servers write and sometimes expose public logs of accessed URLs. Worse, with the JWT exposed, its holder can normally control many features beyond loading the image.
<img src="http://server-domain.example/path/to/image?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c">
When the server receives the request, it must validate the token from the query string and respond with the content.
SECURITY NOTES: worse than (1), because now authentication info (the auth JWT) is exposed in the URL and can be cached/logged by servers, or accessed by any server in the middle, or the user can simply share the "image link" with their colleagues.
But if the JWT is NOT an access token, but a one-time token generated specifically for accessing that particular file, in the form of a JWT, then it provides the same level of security as (1).
3. Cookies
The third method creates an authenticated cookie to validate access to the image.
This method is not recommended because it falls outside the API pattern (web API / token-based authentication in general).
When the server receives the request, it needs to validate that the cookie is valid.
SECURITY NOTES: if you can secure your cookies, and XSS and CSRF are not just letters to you, then it is a solution. But keep in mind: cookies are sent by the browser automatically with each request. Much more information about possible threats and solutions: Where to store JWT in browser? How to protect against CSRF?
My solution to basically this exact same problem, based on Jeferson Tenorio's answer (option 1), was to sign the URL to my API call with an encryption of the image and the user's JWT token, e.g. path/to/image?token=xxxx. In Laravel this is easily accomplished with encrypt($your_object) and decrypt($token) (https://laravel.com/docs/5.7/encryption); I then used the extracted token to verify that the user had access to the file in question. But there are probably many other libraries capable of handling this.
I would be curious whether there are any security concerns, but from my perspective the JWT is never exposed as plain text, and the encryption relies on a secret key that malicious actors shouldn't have access to, so it seems like it should be fairly secure. My only real complaint is that the token is quite long with this method, which does not make for presentable URLs.

How can I get the cached credentials in application startup on XDK platform?

I have stored the login details in a cache file by using these lines in the login process.
intel.xdk.cache.setCookie("userid",username,50);
intel.xdk.cache.setCookie("password",password,50);
I want the app to remember the credentials, so I thought I somehow have to get them in the init-app.js file and forward the user to the content page.
Which method should I use to forward to a specific page in JS, bypassing the index.html page?
And is this an appropriate way to cache authentication?
Instead, use localStorage.setItem("password", password); and retrieve it with localStorage.getItem("password");.
Don't store passwords in localStorage. What I do:
User authenticates using username and password from app
The server authenticates the request and sends a token (JSON Web Token), which is then stored in localStorage
The app then queries the user's profile using the token

upload files directly to amazon s3 using fineuploader

I am trying to upload files directly to S3, but based on my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In this request, your server will respond with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request and requires you to use your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/.
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
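For the first option, the server-side signing endpoint essentially base64-encodes the policy document the uploader sends and signs it with your AWS secret key. A minimal Python sketch of that signing step is below, assuming the older signature-version-2 (HMAC-SHA1) style and a placeholder secret key; consult the Fine Uploader server-side examples for the exact request/response shapes your version expects.

```python
import base64
import hashlib
import hmac
import json

AWS_SECRET_KEY = b"placeholder-secret-key"  # keep server-side only

def sign_policy(policy_document: dict) -> dict:
    """Base64-encode the POST policy and sign it with HMAC-SHA1 (AWS V2 style)."""
    policy_b64 = base64.b64encode(
        json.dumps(policy_document).encode()).decode()
    signature = base64.b64encode(
        hmac.new(AWS_SECRET_KEY, policy_b64.encode(), hashlib.sha1).digest()
    ).decode()
    # The uploader expects a JSON response carrying both fields
    return {"policy": policy_b64, "signature": signature}

response = sign_policy({
    "expiration": "2030-01-01T00:00:00Z",
    "conditions": [{"bucket": "my-bucket"}, {"acl": "private"}],
})
```

Because only the base64 policy and its signature ever reach the browser, the secret key stays on your server, which is the whole point of the signing round-trip.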
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/