I am building a website where users can upload images and later view them on different devices. I am planning to store images on S3, while my webserver will be running on EC2.
Now I'm unsure whether to serve images directly from S3 to the client (browser, app, etc.) or to serve them through my webserver.
If I serve them directly from S3, the webserver will be less loaded, but I need to authenticate the requests going straight to S3 (since each user should only be able to view his/her own images).
Similarly, should I upload images directly to S3 without bringing my webserver in between?
Which approach will be more expensive (bandwidth utilization, etc.)?
thanks!
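For the direct-from-S3 option, the usual answer to the authentication concern is a short-lived pre-signed URL: your webserver authenticates the user, signs a URL for that user's object, and the browser then fetches the image straight from S3. Below is a minimal sketch of SigV4 query-string signing; in practice you'd use an AWS SDK helper such as boto3's `generate_presigned_url`, and the bucket, key, and credentials here are made up:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get_url(bucket, key, region, access_key, secret_key, expires=3600):
    """Build a time-limited SigV4 pre-signed GET URL for a private S3 object."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters must appear sorted by name in the canonical request.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        "/" + quote(key),
        canonical_query,
        f"host:{host}\n",      # canonical headers (just Host here)
        "host",                # signed header names
        "UNSIGNED-PAYLOAD",    # the body is not signed for pre-signed GETs
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    def _sign(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key: date -> region -> service -> "aws4_request".
    k = _sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"
```

The expiry means a leaked link stops working after an hour, and the webserver never has to proxy the image bytes themselves.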
I have a question: I am looking to deploy my app to Laravel Forge. Currently my app saves images locally on the server, but once I scale the app and add load balancers, I don't think saving images on individual servers is good practice,
because if one server goes down, some of the images won't load, for example.
How could I handle this properly?
I have an Umbraco 8.2.2 application which is hosted in AWS EC2 server.
Recently I have encountered server availability issues that cause occasional downtime.
One of the solutions I've thought about is to maintain an additional AWS EC2 server which hosts the same application (same code, same database) and to configure load balancing between them.
Each server would host both the client and the server side.
To what extent is this possible, in your experience?
How can I handle obstacles like the shared media and cache folders, which need to be the same on both servers?
I've heard about S3 as an option.
What additional obstacles may I face, and what should I put my focus on?
Thanks.
This sounds like a good use case for Amazon EFS which offers you a shared POSIX file system. You can mount the directories where your media and cache folders are located to the EFS share and then mount the EFS share to the backend EC2 instances that are behind the load balancer. This solution requires very little or no changes to your application itself, you will just be changing the storage media for certain files in your application.
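If you go the EFS route, attaching the share to each instance behind the load balancer is an NFS mount plus an /etc/fstab entry so it survives reboots. A sketch using the mount options AWS recommends; the file-system ID, region, and media path are hypothetical:

```shell
# Mount the EFS share at the application's media folder
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/www/app/media

# Equivalent /etc/fstab entry so the share is remounted on boot:
# fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/www/app/media nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

With the same mount on every instance, whichever server the load balancer picks sees the same media and cache files.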
As for obstacles, EFS is a network filesystem, therefore, it is generally not recommended to execute code from your EFS share or to use it for applications that require very low storage latency. If that's the case then you can consider Amazon FSx but that's a very expensive solution.
If you can't avoid executing your code from EFS, just try it out and see how it affects your application's performance. EFS works fine for plenty of web application use cases. Here is a tutorial on how to host a simple website using EFS behind a load balanced environment to get you started.
If EFS is not an option, then you could try to offload your static content to Amazon S3 and serve it through CloudFront. This is probably a cheaper option and offloads a lot of traffic from your load balancer and EC2 instances, but it is also probably more work, because you have to refactor your application to serve your content through CloudFront. Here is a tutorial (there are plenty more online) on how to create a static website that serves content through CloudFront. In your case, you would serve the content (i.e. your media files) through S3/CloudFront and update the links used in your application so that the content is retrieved from the CloudFront endpoint instead of directly from your application/load balancer endpoint. So the work is on two fronts: setting up the S3/CloudFront environment, and configuring your application to offload the content to S3 and serve it through CloudFront.
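Once the distribution exists, the application-side work is mostly rewriting media links to point at the CloudFront domain while leaving dynamic pages on the load balancer. A sketch of such a helper; both hostnames are hypothetical placeholders:

```python
from urllib.parse import urlsplit, urlunsplit

# Hypothetical domains: replace with your app's hostname and your distribution's.
APP_HOST = "www.example.com"
CDN_HOST = "d111111abcdef8.cloudfront.net"

def to_cdn_url(url, prefixes=("/media/",)):
    """Rewrite links to static content so they are served via CloudFront."""
    parts = urlsplit(url)
    if parts.netloc == APP_HOST and parts.path.startswith(prefixes):
        # Static asset: point it at the CDN, keeping path and query intact.
        return urlunsplit(("https", CDN_HOST, parts.path, parts.query, parts.fragment))
    return url  # dynamic pages keep pointing at the load balancer
```

In a CMS like Umbraco this rewriting usually hooks into wherever media URLs are generated, so it lives in one place rather than scattered through templates.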
We are currently developing a service to share photos about people's interests, and we are using the technologies below (newbies, btw).
For the backend:
Node.js
MongoDB
Amazon S3
For the frontend:
iOS
Android
Web (AngularJS)
Storing and serving images is a big deal for our service (it must be fast), so we are thinking about performance. We stored photos in MongoDB at first, but then switched to AWS S3.
So,
1. Our clients can upload images from the app(s).
2. We handle these images in Node.js and send them to AWS S3 storage.
3. S3 sends a URL back to us.
4. We save the URL in the user's related post.
5. When a user wants to see his photos, the app gets them via their URLs.
6. Finally, the images go from S3 to the user directly.
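The six steps above can be sketched end to end. This toy version uses an in-memory dict in place of the MongoDB posts collection and a stub in place of the real S3 PUT done from Node.js; all names are hypothetical:

```python
import uuid

BUCKET_URL = "https://photos.s3.amazonaws.com"  # hypothetical bucket endpoint

posts = {}  # stand-in for the MongoDB "posts" collection: user_id -> photo URLs

def upload_to_s3(image_bytes):
    """Stand-in for the real S3 upload; returns the stored object's URL."""
    key = f"photos/{uuid.uuid4().hex}.jpg"
    # (real code would stream image_bytes to S3 here)
    return f"{BUCKET_URL}/{key}"

def handle_upload(user_id, image_bytes):
    # Steps 2-4: push the bytes to S3, get a URL back, save it on the user's post.
    url = upload_to_s3(image_bytes)
    posts.setdefault(user_id, []).append(url)
    return url

def photos_for(user_id):
    # Step 5: the app fetches the stored URLs; images then load straight from S3.
    return posts.get(user_id, [])
```

The key property of this flow is that the image bytes only pass through Node.js once, on upload; every later view goes from S3 to the client directly.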
Is this a good way to handle the situation, or is there a better way to do it?
Thanks
I have a server side API running on Heroku for one of my iOS apps, implemented as a Ruby Rack app (Sinatra). One of the main things the app does is upload images, which the API then processes for meta info like size and type and then stores in S3. What's the best way to handle this scenario on Heroku since these requests can be very slow as users can be on 3G (or worse)?
Your best option is to upload the images directly to Amazon S3 and then have it ping you with the details of what was uploaded.
https://devcenter.heroku.com/articles/s3#file-uploads
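The linked Heroku article describes browser-based uploads: the Rack app only signs an upload policy, and the file bytes go from the client straight to S3, so the slow 3G transfer never ties up a dyno. A rough sketch of building the signed POST form fields, shown in Python rather than Ruby for brevity; the bucket, region, and credentials are made up:

```python
import base64
import datetime
import hashlib
import hmac
import json

def sign_post_policy(bucket, region, access_key, secret_key, key_prefix="uploads/"):
    """Build the fields a browser form needs to POST a file straight to S3."""
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    credential = f"{access_key}/{datestamp}/{region}/s3/aws4_request"

    # The policy limits what the browser may upload (bucket, key prefix, expiry).
    policy = {
        "expiration": (now + datetime.timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "conditions": [
            {"bucket": bucket},
            ["starts-with", "$key", key_prefix],
            {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
            {"x-amz-credential": credential},
            {"x-amz-date": amz_date},
        ],
    }
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

    def _sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # Same SigV4 key derivation chain: date -> region -> service -> "aws4_request".
    k = _sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _sign(k, part)
    signature = hmac.new(k, policy_b64.encode(), hashlib.sha256).hexdigest()

    return {
        "policy": policy_b64,
        "x-amz-algorithm": "AWS4-HMAC-SHA256",
        "x-amz-credential": credential,
        "x-amz-date": amz_date,
        "x-amz-signature": signature,
    }
```

The client then does a multipart POST of these fields plus the file to the bucket endpoint, and afterwards pings your API with the uploaded key so you can extract the metadata.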
I am setting up a website that uses Amazon EC2 as Webserver, EBS to store the data of the website and another instance of Wowza for Video on Demand streaming and it seems hard to find answers for my questions:
If my webserver instance is terminated, am I losing the Apache settings/modules on my EC2 instance as well?
Can I run MySQL on my webserver instance to save the cost of RDS, or is that a bad idea?
If I am using RDS for the database, is it also deployed to edge locations (like CloudFront does)?
If I have a Wowza media server instance also running on EC2, can Wowza make use of CloudFront as well, so that somebody from somewhere in the world will get a VOD stream from the nearest edge location?
Thanks!
If your instance is terminated, the instance and the EBS storage associated with it (any volume whose delete-on-termination flag is set, which is the default for the root volume) are lost. If you want to remove an instance without losing the state of that server, create an image (AMI) of the server before terminating it.
It depends on how much IT admin you want to do. The whole idea of Amazon is that it takes this admin away from you, for a cost higher than doing it yourself: they handle backing up databases and load balancing for you (the trickier part). That said, Amazon isn't fail-proof; you still have to keep backups outside of the Amazon system for everything. I've had instances crash and trash the disk; it does happen.
Their database instances are deployed in their main data center regions, which are separate from their CloudFront edge locations. Keeping the servers and databases in the same zone will save you network costs.
To use CloudFront you first create a distribution (obviously) and then use the CloudFront domain instead of your own; the CloudFront cache maps through to your domain and caches at the edge locations. If the content is accessible from the server, it's possible to put CloudFront in front of it. Note that CloudFront not only charges slightly more for traffic than serving directly from your server, it also charges for getting the content from your server to the edge locations as instance traffic, and you'll be billed per 10,000 requests in addition to bandwidth (larger content works out cheaper per MB).
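To see how those charge components combine, here is a back-of-the-envelope comparison. The rates are illustrative placeholders, not current AWS prices, so check the pricing pages before relying on any of these numbers:

```python
# Illustrative, made-up rates; NOT current AWS pricing.
CF_PER_GB = 0.085           # CloudFront data transfer out, $/GB
CF_PER_10K_REQUESTS = 0.01  # CloudFront HTTP request charge, $ per 10,000
EC2_PER_GB = 0.09           # data transfer out from EC2 directly, $/GB

def monthly_cost_cloudfront(gb_out, requests, cache_hit_ratio=0.9):
    """Edge traffic + origin-fetch traffic for cache misses + per-request fees."""
    origin_gb = gb_out * (1 - cache_hit_ratio)  # misses are billed from EC2 too
    return (gb_out * CF_PER_GB
            + origin_gb * EC2_PER_GB
            + requests / 10_000 * CF_PER_10K_REQUESTS)

def monthly_cost_direct(gb_out):
    """Serving everything straight from the EC2 instance."""
    return gb_out * EC2_PER_GB
```

The point of the exercise: with a poor cache-hit ratio or many tiny requests, CloudFront can cost more than serving directly, while large, well-cached content tilts the other way.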
It's also possible to map your own domain to the CloudFront URL if you want a prettier-looking domain.