Content delivery network - image

Is there any CDN service where we don't have to upload images manually? It should get the image from the original source (my site) on the first request and store it for subsequent requests. Note: my website is written in ASP.NET.
Regards.

When setting up the CDN on Windows Azure, you configure it to point to a storage account. If you extend your application to save images to blob storage rather than the local file system, these images will be available via your CDN endpoint without any further work.
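A minimal sketch of the blob-storage upload, using Python's azure-storage-blob SDK for illustration (the idea is the same from ASP.NET with the .NET storage SDK; the connection string and container name are placeholders):

    # Upload an image to Azure Blob Storage so the CDN endpoint that points at
    # this storage account can serve it. Values marked ... are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(
        "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
    )
    blob = service.get_blob_client(container="images", blob="photo.jpg")

    with open("photo.jpg", "rb") as f:
        blob.upload_blob(f, overwrite=True)

Once the CDN endpoint is mapped to the storage account, the blob is reachable through the CDN URL with no extra upload step.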

If I understand your question correctly, the Azure CDN already does this (as does pretty much any CDN provider). You can configure the CDN to point to your website, and when users access the CDN URL, the CDN will fetch the content from your website and cache it within the CDN. The next request from a user in that same CDN region will get the file directly from the CDN without going to your website.
See the "Caching content from hosted services" section in this article.

If you store your image on your own origin server, you don't have to upload images manually to the CDN. The CDN will get the image from your origin server on the first request and cache it on its edge server (not in CDN storage) for subsequent requests.
If you instead store your image in CDN storage, you will have to upload the image to CDN storage yourself so that it can be cached on the CDN edge server upon the first request.
All CDNs behave the same way in this respect.
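A quick way to observe this origin-pull behavior is to request the same CDN URL twice and compare the cache headers. A hedged sketch in Python (the URL is a placeholder, and the exact header name varies by CDN; CloudFront uses X-Cache, while many CDNs expose Age instead):

    import requests

    url = "https://cdn.example.com/images/photo.jpg"  # placeholder CDN URL

    first = requests.get(url)   # edge fetches from the origin (a cache miss)
    second = requests.get(url)  # edge serves its cached copy (a cache hit)

    print(first.headers.get("X-Cache"), first.headers.get("Age"))
    print(second.headers.get("X-Cache"), second.headers.get("Age"))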

Related

Pattern for REST API with Image

I am in the process of creating a REST API with image upload/retrieval capability.
Instead of sending image data to the server for it to upload to storage, I am thinking of doing the following:
the client directly uploads the image to storage (Azure Blob Storage)
obtain the image URL from blob storage if the upload is successful
send the image metadata along with the blob storage URL to the server to be maintained
Is this an acceptable approach for managing image data (or videos, or any non-string data) through a REST API?
Also, what are some pros/cons of setting up the service this way?
There's nothing preventing you from doing it that way, but it introduces a bit of unnecessary complexity:
The client needs to be aware of different endpoints to handle this particular type of request.
If something changes in your Azure Blob Storage endpoint, you have to change the client code. And if you have users running an old cached version of the app, they may get odd errors.
Your client has to be carefully implemented to handle the process of first uploading the image to Azure and then sending the URL to the API. If the user refreshes, clicks the upload button again, or hits a network issue, you will face complicated scenarios.
My recommendation is to encapsulate this complexity in the server, where you have better control of what's going on, by letting the client send a POST request with the multipart/form-data MIME type. The server can respond with details about the endpoint for the image.
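A minimal sketch of that recommendation, with Flask and the azure-storage-blob SDK standing in for whatever server stack you use (the endpoint path and container name are illustrative):

    import uuid

    from azure.storage.blob import BlobServiceClient
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    # Placeholder connection string; in practice this comes from configuration.
    service = BlobServiceClient.from_connection_string(
        "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"
    )

    @app.route("/images", methods=["POST"])
    def upload_image():
        file = request.files["image"]  # the multipart/form-data file part
        blob_name = f"{uuid.uuid4()}-{file.filename}"
        blob = service.get_blob_client(container="images", blob=blob_name)
        blob.upload_blob(file.stream)
        # The client only ever talks to this endpoint, so storage details
        # can change without touching client code.
        return jsonify(url=blob.url), 201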

laravel login does not work with cloudFront AWS and Certificate Manager

I have an application built on Laravel. I needed to enable HTTPS on my system, so I used CloudFront and Certificate Manager.
I was able to configure everything, except that the Laravel authentication system stopped working. Apparently Laravel sessions do not work through CloudFront (CDN).
The system shows no errors. It simply does not authenticate the user.
I suspect the reason is CloudFront, because CloudFront sits between the browser and the EC2 server. Does anyone know if there is a Laravel authentication problem with CloudFront and Certificate Manager?
my system: https://loja2.softshop.com.br/login
credentials:
login: teste@sandbox.pagseguro.com.br
password: tim140
The Laravel validation also does not show its error messages.
For web distributions, you can choose whether you want CloudFront to forward cookies to your origin and to cache separate versions of your objects based on cookie values in viewer requests.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Cookies.html
By default, no cookies are forwarded by CloudFront. Since most web sites providing any kind of dynamic content use cookies for managing state and authentication, the default configuration usually needs to be modified for dynamic sites.
Note the caveats on the same page of the documentation -- you generally only want to forward cookies to your origin on requests where the origin actually needs them, so you may want to create separate cache behaviors with cookies disabled for static resources, in order to maintain a reasonable cache hit ratio for those resources.
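As a hedged sketch of that change with boto3 (the distribution ID is a placeholder, and this uses the legacy ForwardedValues API rather than the newer cache policies):

    import boto3

    cf = boto3.client("cloudfront")
    DISTRIBUTION_ID = "E1234567890ABC"  # placeholder

    resp = cf.get_distribution_config(Id=DISTRIBUTION_ID)
    config = resp["DistributionConfig"]

    # Forward only the cookies Laravel needs for sessions and CSRF;
    # forwarding "all" also fixes login but hurts the cache hit ratio.
    config["DefaultCacheBehavior"]["ForwardedValues"]["Cookies"] = {
        "Forward": "whitelist",
        "WhitelistedNames": {
            "Quantity": 2,
            "Items": ["laravel_session", "XSRF-TOKEN"],
        },
    }

    cf.update_distribution(
        Id=DISTRIBUTION_ID,
        IfMatch=resp["ETag"],  # optimistic-locking token required by the API
        DistributionConfig=config,
    )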

How to make JS files stored in Github repository deployed to Heroku served via Amazon CDN

I have a GitHub repository auto-deployed to Heroku. Basically, what I want to do is serve all the static files in the repository via the Amazon CloudFront CDN. Do I need to pull my GitHub repository into Amazon S3, and how do I get the files served by the CDN while my repository is being auto-deployed to Heroku?
Right now all the static assets are being served from Heroku, as there is no CDN involved.
I want to serve the static files via the CDN while still retaining Heroku's auto-deploy feature.
How can I achieve this?
Do I need to upload the static assets individually to S3?
What you need to do is create a CloudFront distribution with your Heroku app as the origin, then update the DNS for your website to point a CNAME record to that distribution instead of your Heroku application.
So, right now, let's say your Heroku app is called: my-app.
This means that if you view your app on Heroku, you will go to http://my-app.herokuapp.com
If you then create a CloudFront distribution and point it at my-app.herokuapp.com, you will get a new CloudFront domain, something like my-id.cloudfront.net.
So what you can do next is update the DNS for your website (www.my-app.com) and create a CNAME record such that
www.my-app.com -> my-id.cloudfront.net
This will make all requests to your website go through CloudFront first -- this way the CloudFront CDN will have a chance to cache your static assets properly and speed it up after the first request.
You can then configure CloudFront to set the cache times for these assets.
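CloudFront honors the Cache-Control headers your origin sends by default, so the main job for the Heroku app is to set long lifetimes on its static assets. A minimal sketch, with Flask standing in for whatever framework the app uses:

    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/static/<path:filename>")
    def static_asset(filename):
        resp = send_from_directory("static", filename)
        # A year-long lifetime is safe when asset filenames are fingerprinted;
        # CloudFront caches the asset at the edge for this duration.
        resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
        return resp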

upload files directly to amazon s3 using fineuploader

I am trying to upload files directly to S3, but from my research it needs server-side code or a dependency on Facebook, Google, etc. Is there any way to upload files directly to Amazon using Fine Uploader only?
There are three ways to upload files directly to S3 using Fine Uploader:
Allow Fine Uploader S3 to send a small request to your server before each API call it makes to S3. In this request, your server will respond with a signature that Fine Uploader needs to make the request. This signature ensures the integrity of the request, and requires you to use your secret key, which should not be exposed client-side. This is discussed here: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/. (A sketch of this signing endpoint appears after the list of options.)
Ask Fine Uploader to sign all requests client-side. This is a good option if you don't want Fine Uploader to make any requests to your server at all. However, it is critical that you don't simply hardcode your AWS secret key. Again, this key should be kept a secret. By utilizing an identity provider such as Facebook, Google, or Amazon, you can request very limited and temporary credentials which are fed to Fine Uploader. It then uses these credentials to submit requests to S3. You can read more about this here: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/.
The third way to upload files directly to S3 using Fine Uploader is to either generate temporary security credentials yourself when you create a Fine Uploader instance, or simply hard-code them in your client-side code. I would suggest you not hard-code security credentials.
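For the first option, here is a hedged sketch of the signature endpoint, based on the policy-signing flow described in Fine Uploader's S3 documentation (Flask and the endpoint path are illustrative, and chunked uploads additionally send a string of headers to sign, which this sketch omits):

    import base64
    import hashlib
    import hmac
    import json

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    AWS_SECRET_KEY = b"..."  # placeholder; must never reach the browser

    @app.route("/s3/signature", methods=["POST"])
    def sign_policy():
        # Fine Uploader POSTs the policy document it wants signed as JSON.
        policy = base64.b64encode(json.dumps(request.get_json()).encode())
        signature = base64.b64encode(
            hmac.new(AWS_SECRET_KEY, policy, hashlib.sha1).digest()
        )
        return jsonify(policy=policy.decode(), signature=signature.decode())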
Yes, you can do this with Fine Uploader. Here is a link that explains very well what you need to do: http://blog.fineuploader.com/2013/08/16/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser/
Here is what you need. In this blog post the Fine Uploader team introduces serverless S3 uploads via JavaScript: http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/

Is there a way to serve S3 files directly to the user, with a URL that can't be shared?

I'm storing some files for a website on S3. Currently, when a user needs a file, I create a signed URL (query-string authentication) that expires and send that to their browser. However, they can then share this URL with others before it expires.
What I want is some sort of authentication that ensures the URL will only work from the authenticated user's browser.
I have implemented a way to do this by using my server as a relay between Amazon and the user, but I would prefer to point users directly to Amazon.
Is there a way to have a session cookie of some sort created in the user's browser, and then have Amazon expect that cookie before serving files?
That's not possible with S3 alone, but CloudFront provides this feature. Take a look at this chapter in the documentation: Using a Signed URL to Serve Private Content.
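For reference, a hedged sketch of generating such a signed URL with botocore's CloudFrontSigner (the key pair ID, private-key file, and distribution domain are placeholders):

    from datetime import datetime, timedelta

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    KEY_PAIR_ID = "APKAEXAMPLE"  # placeholder CloudFront key pair ID

    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)

    def rsa_signer(message):
        # CloudFront signed URLs are signed with RSA-SHA1.
        return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
    url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/private/file.jpg",  # placeholder
        date_less_than=datetime.utcnow() + timedelta(minutes=10),
    )
    print(url)  # expiring URL handed to the authenticated user's browser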
