Node.js and Amazon S3 friendship for a photo-sharing application - performance

We are currently developing a service for sharing photos about people's interests, and we are using the technologies below. (We're newbies, by the way.)
For the backend:
Node.js
MongoDB
Amazon S3
For the frontend:
iOS
Android
Web (AngularJS)
Storing and serving images is a big deal for our service (it must be fast), so we are thinking about performance. We stored photos in MongoDB at first, but then switched to AWS S3.
So, the flow is:
1. Our clients can upload images from the app(s).
2. We handle these images in Node.js and send them to AWS S3 (see the sketch after this list).
3. S3 returns a URL to us.
4. We save the URL on the user's related post.
5. When the user wants to see his photos, the app fetches them with their URLs.
6. Finally, the images go from S3 to the user directly.
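For reference, step 2 currently looks roughly like this on our side (a minimal sketch using the AWS SDK for Node.js; the bucket name, region and key scheme are placeholders):

```js
// Sketch of step 2: receive the image in Node.js and push it to S3, then
// keep the returned URL to save on the post (steps 3 and 4).
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

async function uploadPhoto(userId, fileName, fileBuffer, mimeType) {
  const key = `photos/${userId}/${Date.now()}-${fileName}`; // placeholder key scheme
  const result = await s3
    .upload({
      Bucket: 'my-photo-bucket', // placeholder bucket
      Key: key,
      Body: fileBuffer,
      ContentType: mimeType
    })
    .promise();
  return result.Location; // the URL we store on the user's post
}
```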
Is this a good way to handle the situation, or is there a better way to do it?
Thanks

Related

Allowing users of a Laravel app to connect their own Amazon AWS S3 or DO Spaces S3 to the app to serve files - possible?

First post here, although I've been reading for about 7 years :). Please take it easy on me if anything is wrong with my post.
I am working on a social network/marketplace for online education. The business problem I am dealing with is how to delegate the cost of hosting (most importantly video content) to the vendors themselves, in order to simplify the platform's relationship with them.
As an example, Patreon.com integrates with Vimeo to upload videos from the Patreon website straight to Vimeo, so the platform does not incur the cost of hosting creator videos.
My question is whether it would be possible to do the same thing with Amazon S3 or DigitalOcean Spaces. Instead of our app (Laravel + Vue) having its own hosting on Amazon S3, each user of our app would be able to connect their own S3 storage, access their buckets/objects and share them on our platform. That way they pay their own hosting bills and we don't have to deal with "re-selling" hosting.
Has anyone gone the route of allowing their app users to enter their own S3 hosting credentials, storing them in the db and, when retrieving a file, setting those credentials on the fly (as ml59 suggested) and accessing the user's S3 files?
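To make the idea concrete, here is a minimal sketch of the "set credentials on the fly" pattern. It is written in Node.js just to show the shape (our stack is Laravel + Vue, so this is only illustrative), and the field names on the user record, the endpoint and the bucket are all placeholders:

```js
// Illustrative sketch only: build an S3 client per request from the
// credentials stored for that user, then operate on *their* bucket.
const AWS = require('aws-sdk');

function s3ClientForUser(userRecord) {
  // userRecord.s3AccessKeyId etc. are hypothetical fields from our db
  // (the stored secret would presumably be encrypted at rest).
  return new AWS.S3({
    accessKeyId: userRecord.s3AccessKeyId,
    secretAccessKey: userRecord.s3SecretAccessKey,
    region: userRecord.s3Region,
    endpoint: userRecord.s3Endpoint // e.g. a DigitalOcean Spaces endpoint
  });
}

async function listUserFiles(userRecord) {
  const s3 = s3ClientForUser(userRecord);
  const result = await s3
    .listObjectsV2({ Bucket: userRecord.s3Bucket, MaxKeys: 100 })
    .promise();
  return result.Contents.map(obj => obj.Key);
}
```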
Thanks,

How to pipe upload stream of large files to Azure Blob storage via Node.js app on Azure Websites?

I am building a video service using Azure Media Services and Node.js.
Everything went semi-fine till now; however, when I tried to deploy the app to Azure Web Apps for hosting, any large file fails with a 404.13 error.
Yes, I know about maxAllowedContentLength, and no, that is NOT a solution. It only goes up to 4GB, which is pathetic - even HDR environment maps can easily exceed that amount these days. I need to enable users to upload files up to 150GB in size. However, when Azure Web Apps receives a multipart request, it appears to buffer it into memory until a certain threshold of either bytes or seconds (upon hitting which it returns a 404.13, or a 502 if my connection is slow) BEFORE running any of my server logic.
I tried the Transfer-Encoding: chunked header in the server code, but even if that would help, Web Apps doesn't let my code run in the first place, so it doesn't actually matter.
For the record: I am using Sails.js on the backend and Skipper is handling the stream piping to Azure Blob Service. Localhost obviously works just fine regardless of file size. I made a duplicate of this question on the MSDN forums, but those are as slow as always; you can go there to see what I have found so far.
Client-side I am using Ajax FormData to serialize the fields (one text field and one file) and send them, using the progress event to track upload progress.
Is there ANY way to make this work? I just want it to let my server-side logic handle the data stream, without buffering the bloody thing.
Rather than running all this data through your web application, you would be better off having your clients upload directly to a container in your Azure blob storage account.
You will need to enable CORS on your Azure Storage account to support this. Then, in your web application, when a user needs to upload data, you would instead generate a SAS token for the storage container you want the client to upload to and return it to the client. The client would then use the SAS token to upload the file into your storage account.
On the back-end, you could fire off a WebJob to do whatever processing you need to do on the file after it's been uploaded.
Further details and sample AJAX code for this are available in this blog post from the Azure Storage team.
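For a rough idea of the server-side half, here is a sketch of generating a short-lived, write-only SAS for a specific blob using the @azure/storage-blob package (the account, container and blob names are placeholders, and the expiry is arbitrary; the blog post mentioned above covers the client-side part):

```js
// Sketch: issue a short-lived, write-only SAS URL that the browser can use
// to upload straight to Blob storage, bypassing the web app entirely.
const {
  StorageSharedKeyCredential,
  generateBlobSASQueryParameters,
  BlobSASPermissions
} = require('@azure/storage-blob');

function uploadSasUrl(accountName, accountKey, containerName, blobName) {
  const credential = new StorageSharedKeyCredential(accountName, accountKey);
  const sas = generateBlobSASQueryParameters(
    {
      containerName,
      blobName,
      permissions: BlobSASPermissions.parse('cw'),      // create + write only
      expiresOn: new Date(Date.now() + 15 * 60 * 1000)  // valid for 15 minutes
    },
    credential
  ).toString();
  return `https://${accountName}.blob.core.windows.net/${containerName}/${blobName}?${sas}`;
}
```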

Upload videos to Amazon S3 using Ruby with Sinatra

I am building an Android app which has a backend written in Ruby/Sinatra. The data from the Android app comes in as JSON.
The database being used is MongoDB.
I am able to receive the data on the backend. Now what I want to do is upload a video to Amazon S3, sent from the Android app in the form of a byte array.
I also want to store the video as a string in the local database.
I have been using the carrierwave, fog and carrierwave-mongoid gems but didn't have any luck.
These are some of the blogs I followed:
https://blog.engineyard.com/2011/a-gentle-introduction-to-carrierwave/
http://www.javahabit.com/2012/06/03/saving-files-in-amazon-s3-using-carrierwave-and-fog-gem/
I'd appreciate it if someone could guide me on how to go about this specifically with Sinatra and MongoDB, because that's where I am facing the main issue.
You might think about using the AWS SDK for Android to upload directly to S3, so that your app server thread doesn't get stuck while a user is uploading a file. If you are using a service like Heroku, you would be paying extra $$$ just because your user had a lousy connection.
However, in this scenario:
Uploading to S3 should be straightforward once you have your CarrierWave mounting in place.
You should never store your video in the database, as it will slow you down! DBs are not optimised for files; OSs are. Video is binary data and cannot be stored as text - you would need a blob type if you want to commit this crime.
IMO, uploading to S3 is good enough, as you can then use the Amazon CloudFront CDN to copy and distribute your content in a more optimised way.
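If you do go the direct-upload route but don't want to ship AWS credentials inside the Android app, one common alternative is to have the backend hand out a short-lived pre-signed PUT URL that the app uploads to. Here is a rough sketch, shown in Node.js for consistency with the rest of this page (the Ruby aws-sdk gem has an equivalent presigner); the bucket, key scheme and expiry are placeholders:

```js
// Sketch: the backend signs a one-off PUT URL; the Android app then uploads
// the video bytes straight to S3 with a plain HTTP PUT to that URL.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

function presignedUploadUrl(userId, fileName, contentType) {
  const key = `videos/${userId}/${Date.now()}-${fileName}`; // placeholder key scheme
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-video-bucket',   // placeholder bucket
    Key: key,
    ContentType: contentType,
    Expires: 15 * 60             // URL valid for 15 minutes
  });
  // Store `key` (a plain string) in MongoDB against the user's record.
  return { url, key };
}
```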

Serving images directly from S3, or an EC2 server in between?

I am building a website where users can upload images and later view them on different devices. I am planning to store images on S3, while my webserver will be running on EC2.
Now, I have a doubt: whether to serve images directly from S3 to the client (browser, app, etc.) or serve them through my webserver in between.
If I serve directly from S3, the webserver will be less loaded, but I need to authenticate requests going directly to S3 (as only a user should be able to view his/her own images).
Similarly, should I upload images directly to S3 without bringing my webserver in between?
In which case will it be more expensive (bandwidth utilization etc.)?
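For the authentication part, I am assuming something like a pre-signed GET URL generated by the EC2 server would do the job, along these lines (a minimal Node.js sketch; the bucket name, region and expiry are placeholders):

```js
// Sketch: the EC2 web server checks that the requesting user owns the image,
// then hands back a short-lived signed URL the client fetches directly from S3.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

function signedImageUrl(imageKey) {
  return s3.getSignedUrl('getObject', {
    Bucket: 'my-image-bucket',   // placeholder bucket
    Key: imageKey,
    Expires: 5 * 60              // link expires after 5 minutes
  });
}
```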
thanks!

How should you handle slow requests (image upload from mobile) on Heroku?

I have a server-side API running on Heroku for one of my iOS apps, implemented as a Ruby Rack app (Sinatra). One of the main things the app does is upload images, which the API then processes for meta info like size and type and then stores in S3. What's the best way to handle this scenario on Heroku, since these requests can be very slow as users may be on 3G (or worse)?
Your best option is to upload the images directly to Amazon S3 and then have it ping you with the details of what was uploaded.
https://devcenter.heroku.com/articles/s3#file-uploads
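The second half of that pattern (the "ping" once the client has finished its direct upload) can be as small as an endpoint that records the S3 key and queues the metadata processing. Here is a rough sketch written as an Express handler purely for illustration, since the real API is Sinatra; the route, payload fields and job helper are all hypothetical:

```js
// Sketch: the client uploads directly to S3, then POSTs the resulting key
// back so the API can record it and process size/type metadata off-request.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/uploads/complete', (req, res) => {
  const { s3Key, userId } = req.body;               // hypothetical payload fields
  if (!s3Key || !userId) {
    return res.status(400).json({ error: 'missing s3Key or userId' });
  }
  enqueueMetadataJob({ s3Key, userId });            // hypothetical job helper
  res.status(202).json({ status: 'queued' });
});

function enqueueMetadataJob(job) {
  // Placeholder: push onto whatever background queue the app uses.
  console.log('queued', job);
}

app.listen(3000);
```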
