I'm giving users the ability to attach images, videos, audio clips, and other file attachments inside an existing web application. Some installs of our product have thousands of users, so the volume of data will get very high.
Amazon S3 is the obvious solution but, due to legal reasons, cannot always be used. I need a solution that my customers can host themselves.
I'm therefore looking to build or adopt a file storage system with the following traits:
Not hosted. Must be installable on my customers' Windows servers.
Horizontally scalable to terabytes of storage.
Similar operation to S3 such that I can make both approaches part of my product.
Proven architecture
I've seen several suggestions for this on StackOverflow and other forums (Eucalyptus Walrus, Hadoop HDFS, MongoDB + GridFS, CouchDB, MogileFS) but couldn't find enough information to identify one as simple and proven.
I have experience with CouchDB and would jump on it if I could be sure that it'll do well with terabytes of video files, but I haven't found a good success story to lean on.
The closest open-source project is OpenStack Swift (https://github.com/openstack/swift), which powers Rackspace Cloud Files.
While S3's internal design is not publicly documented, OpenStack Swift is built to provide very similar functionality.
It is scale-out and has no single point of failure or bottleneck.
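Because Swift can expose an S3-compatible API (via its s3api middleware, formerly the swift3 project), your application code can stay identical across a hosted S3 backend and a self-hosted Swift backend. A minimal sketch with Python's boto3, where the endpoint URL, bucket name, and credentials are placeholder assumptions:

```python
# Minimal sketch, assuming the customer's self-hosted Swift cluster exposes
# an S3-compatible endpoint (e.g. via Swift's s3api middleware). The endpoint
# URL and credentials below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",  # self-hosted endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same two calls work unchanged against real Amazon S3 (just drop
# endpoint_url), which is what makes the dual-backend product design possible.
s3.upload_file("clip.mp4", "attachments", "user-42/clip.mp4")
s3.download_file("attachments", "user-42/clip.mp4", "/tmp/clip.mp4")
```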
Related
I'm a newbie web developer and I have a basic question regarding my Laravel-based website: where should I put my files? I know there are services like Amazon S3, but firstly I don't know how to work with them, and secondly they are NOT FREE.
There is going to be a fairly large amount of data, including pics and videos (around 10 GB). Where should I store them? And how should I use Laravel to allow users to upload files?
If it will be a bigger project, you should use a cloud service. This is the direction backend development is heading, because it makes your project much easier and faster to maintain and run. If you want to build your own backend, it will take a long time to get it done, since you have to learn a lot of new things and be good at them. There are many key aspects you have to be aware of, like security, scaling, performance and so on. Like you suggested, Amazon AWS works, or (in my opinion much better) Google Firebase. I think Google Firebase should be your pick because it is really easy to understand and has great documentation. Next to the storage service (Google Cloud Storage) there are several other services you could use in the future, like analytics, machine learning or NoSQL databases. And the good thing is that you can connect them all together.
With Google Firebase you have the free Spark Plan, which is completely free with some limitations. And if you scale to many users you can upgrade to the other plans, which are not very expensive. Don't forget that your own backend would cost you time and also money for electricity and hardware.
If you have more questions, feel free to ask me :)
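To make the storage half concrete, here is a minimal sketch of uploading a file to Google Cloud Storage with the official google-cloud-storage Python client. The bucket name and paths are placeholder assumptions, and Firebase provides equivalent SDKs if you would rather call it from your Laravel (PHP) code:

```python
# Minimal sketch, assuming a GCP project with a bucket named
# "my-app-user-uploads" and application-default credentials configured.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-app-user-uploads")

# Store the uploaded file under a per-user prefix so files stay organized.
blob = bucket.blob("uploads/user-1/photo.jpg")
blob.upload_from_filename("/tmp/photo.jpg")

print(f"Stored at gs://{bucket.name}/{blob.name}")
```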
In the not too distant future I will start a project where, over time, we will be storing thousands upon thousands of images. I'm facing a hard decision: whether to use Amazon S3 or EFS to store those images. I think both are very good options, but my question is which would be the better service, or the better practice?
My application will be done with Laravel and I already did the integration of both services.
Most of the characteristics of the project are:
Most of the files I will store will be photos (about 95%).
Approximately 1.5k photos would be stored daily.
The photos are very large (professional cameras).
Traffic to the application will not be heavy: approx. 100 users at a time.
Each user would consult about 100 photos per day.
What do you recommend?
S3 is absolutely the right answer and practice. I have built numerous applications like you describe, some with hundreds of millions of images, and S3 is superior. It also allows for flexibility: your API can return the images as pre-signed URLs, which reduces load on your servers; images can be linked directly via static web hosting; and lifecycle policies can archive less-used data. Additionally, further integration with other AWS services is easy using event triggers.
As for storing/uploading, S3 multipart upload is very useful for increasing both performance and reliability (a short sketch of both features follows).
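A minimal sketch with boto3; the bucket and key names are placeholder assumptions. upload_file switches to multipart automatically above a configurable threshold, and generate_presigned_url creates the time-limited links mentioned above:

```python
# Minimal sketch with boto3; bucket/key names are placeholder assumptions.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# upload_file performs a multipart upload automatically above the threshold,
# uploading parts in parallel for better throughput and retryability.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024, max_concurrency=8)
s3.upload_file("raw/IMG_0001.CR2", "photo-archive", "2024/05/IMG_0001.CR2",
               Config=config)

# A pre-signed URL lets the client fetch the image straight from S3,
# so the bytes never pass through your web servers.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-archive", "Key": "2024/05/IMG_0001.CR2"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```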
EFS would make sense for your type of scenario if you were doing some intensive processing where you had a cluster of servers that needed lower latency with a shared file system; think HPC. EFS would also come at a higher cost and doesn't provide as many extensibility options or built-in features as S3. Your scenario doesn't sound like it requires EFS.
http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
For the scenario you proposed, AWS S3 is the choice. Why?
Since images are mostly added and rarely modified, S3 fits the workload, and it costs roughly one tenth of EFS.
Less overhead on your web servers, since files can be uploaded to and downloaded from S3 directly.
You can leverage event-driven processing with Lambda, e.g. generating thumbnails or applying image-processing filters via an S3 Lambda trigger (see the sketch after this answer).
A higher SLA for availability and durability.
Built-in lifecycle management to archive objects and reduce cost.
AWS EFS can also be an option if you happen to modify the images frequently (EBS is an option there, too).
You can also consider putting AWS CloudFront in front of either option to cache images.
Note: in the end it's not about using a single service. Based on your upcoming requirements you can choose either one of them, or both.
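As a minimal sketch of the Lambda trigger idea: the handler below assumes Pillow is bundled with the function and that thumbnails go to a separate "photo-thumbnails" bucket (both assumptions, not details from the question), the latter to avoid re-triggering the function on its own output.

```python
# Minimal sketch of an S3-triggered Lambda thumbnail generator.
# Assumptions: Pillow is packaged with the function; thumbnails are written
# to a separate "photo-thumbnails" bucket so the trigger doesn't loop.
import io

import boto3
from PIL import Image

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the original, resize in memory, write the thumbnail back.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original)).convert("RGB")
        image.thumbnail((256, 256))

        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)
        s3.put_object(Bucket="photo-thumbnails", Key=key, Body=buffer)
```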
I'm looking at building a somewhat complex log handling system to replace an old ad-hoc setup and could use a bit of advice. I'm pretty familiar with SQL databases and networking, but am very new to NoSQL stores, which seem to be the key to solving this mess. Note that we have a very good team, but a limited licensing budget, so free/open-source options are vastly preferred. (That said, availability of support if something goes pear-shaped would be nice.)
Requirements:
Archive (test) logs generated in the several GB/day range at multiple sites around the world.
Provide fairly instantaneous full-text search of those logs at each site, for debugging purposes.
Push that archived data back to a central location (though a replica at each site would be absolutely okay).
Provide for analytics of that data back at the central location.
Constraints:
The sites have fairly crap Internet connections for the moment (high latency and fairly low bandwidth). Much of the data is generated during the day and a good portion of the sync would have to lag behind and finish overnight each day.
Sites MUST be able to function if the WAN goes completely off-line.
Extras:
The log data is (as usual) highly compressible. Any solution that compresses data in transit from node to node across the WAN is preferred.
Many log files are related to each other in multi-level hierarchies, and that relationship is very important and must be maintained!
Sites will generally not modify the same data or modify it again once stored. This is all archival for the most part.
We can either stream as the logs are generated or push blocks of logs. Streaming is preferred, as it would simplify things considerably.
Options I'm aware of:
Local MySQL and folder structure for logging and local configuration management.
This is what we have now and it's running, but not a long-term solution by any means.
Elasticsearch
I've read that Elasticsearch would probably be really good for this, though from what I understand it doesn't support multi-site deployments by itself.
Cassandra
This seems to have built-in multi-site support, but I'm not exactly familiar with the data-model. Is this a good choice for something like this, or will I hate myself if I give it a try?
CouchDB
This is a document store that seems(?) like a good match for log data, but again doesn't appear to have multi-site support.
Apache Kafka
I read up on this, but I haven't quite wrapped my head around it yet...
Questions:
Do any of these actually let you stream-append logs or are they best suited to dumping completed files in?
Is there a solution I'm missing that might be better?
Any recommendations on multi-site with some of the options that don't support multi-site by themselves?
Interesting links:
https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
http://blog.cloudera.com/blog/2015/07/deploying-apache-kafka-a-practical-faq/
https://www.elastic.co/blog/scaling_elasticsearch_across_data_centers_with_kafka
https://kafka.apache.org/08/ops.html
https://github.com/Stratio/cassandra-lucene-index
I may be a bit biased, since Couchbase is my employer, but this sounds like the kind of problem that XDCR (Cross Datacenter Replication) was made to solve.
You could stand up a cluster at each of your geographical sites (Couchbase calls these "datacenters"), and XDCR would then automatically replicate the data between sites, bidirectionally if needed. If I understand your requirements correctly, this sounds like just what you need.
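For a sense of what the write path could look like, here is a minimal sketch using the Couchbase Python SDK (4.x-style imports). The cluster address, credentials, bucket name, and document-key scheme are all placeholder assumptions; each site writes to its local cluster and XDCR carries the documents across the WAN in the background, so local writes keep working when the WAN is down.

```python
# Minimal sketch; cluster address, credentials, bucket name, and key
# scheme are placeholder assumptions. Each site writes locally; XDCR
# replicates the documents to the other sites asynchronously.
from datetime import datetime, timezone

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("loguser", "password")),
)
collection = cluster.bucket("logs").default_collection()

def append_log(site: str, run_id: str, seq: int, line: str) -> None:
    # Encode the site/run/sequence hierarchy in the key so related log
    # entries stay grouped and the parent-child relationship survives.
    key = f"{site}::{run_id}::{seq:08d}"
    collection.upsert(key, {
        "site": site,
        "run_id": run_id,
        "seq": seq,
        "ts": datetime.now(timezone.utc).isoformat(),
        "line": line,
    })

append_log("site-tokyo", "test-4711", 1, "DUT power-on self-test passed")
```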
I want to build a torrent server on a group of Amazon EC2 machines. This is for completely internal usage. Essentially, I want a fast way to propagate changes to large files that need to reside on multiple servers. How can I do it?
This idea has been explored already, with the most prominent and mature implementation I'm aware of being Twitter's Murder:
Murder is a method of using BitTorrent to distribute files to a large number of servers within a production environment. This allows for scalable and fast deploys in environments of hundreds to tens of thousands of servers where centralized distribution systems wouldn't otherwise function. A "murder" is normally used to refer to a flock of crows, which in this case applies to a bunch of servers doing something.
The Twitter engineering blog features a dedicated article introducing Murder, "Fast datacenter code deploys using BitTorrent", which also includes a video of the corresponding talk by Larry Gadea.
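If you'd rather roll the mechanism yourself than adopt Murder's tooling, here is a minimal sketch with the libtorrent Python bindings; the tracker URL, file names, and paths are placeholder assumptions:

```python
# Minimal sketch using the libtorrent Python bindings; the tracker URL
# and paths are placeholders. One host creates and seeds the torrent;
# every other EC2 node joins the swarm and both downloads and re-uploads
# pieces, so distribution time grows slowly with fleet size.
import libtorrent as lt

# 1) On the build host: create a torrent for the artifact.
fs = lt.file_storage()
lt.add_files(fs, "release.tar.gz")
t = lt.create_torrent(fs)
t.add_tracker("http://tracker.internal:6969/announce")  # internal tracker
lt.set_piece_hashes(t, ".")  # hash pieces relative to the parent dir
with open("release.torrent", "wb") as f:
    f.write(lt.bencode(t.generate()))

# 2) On every node: join the swarm (the build host seeds the same way).
ses = lt.session()
handle = ses.add_torrent({
    "ti": lt.torrent_info("release.torrent"),
    "save_path": "/srv/releases",
})
while not handle.status().is_seeding:
    pass  # in real code, sleep and report progress instead of busy-waiting
```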
I am developing an eCommerce website in ASP.NET MVC 3 in C#, using SQL Server 2008 R2. My question: if I have 5 images that I want to show in a grid view with thumbnails (e.g. something like the Amazon website, which shows customers a couple of pictures per product), would it be advisable for the images to come from the database, or should they reside in the Content\Images folder? There are quite a few sub-categories within sub-categories in my db design. What is the most common approach for a professional developer to follow? I know there are a few options for third-party tools like jQuery and the Telerik Extensions, so I will use them. Thanks.
From my experience and research, it is better to put them in a folder/content structure. Yes, there are security concerns with opening directories to the public, but if you instead upload files dynamically via FTP those problems are solved. I have heard horror stories about storing files in the database, and have seen the issues come up myself, but they were resolvable. Basically, writing to the database is easier and avoids the security issues of opening a directory to the public; just make sure to regularly verify from your backups that the files are not corrupt, or keep the data on a failover cluster where that will never be a problem.
So, in summary: the database is fine as long as you regularly check backups by restoring them to confirm the files are not corrupt, or run a failover cluster. Otherwise, go with the typical folder/content structure, but use FTP to upload the files so there are no directories open to the public.
For me, the best answer to this question is this paper: To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem.
Summary: Application designers often face the question of whether to store large objects in a filesystem or in a database. Often this decision is made for application design simplicity. Sometimes, performance measurements are also used. This paper looks at the question of fragmentation, one of the operational issues that can affect the performance and/or manageability of the system as deployed long term. As expected from the common wisdom, objects smaller than 256K are best stored in a database while objects larger than 1M are best stored in the filesystem. Between 256K and 1M, the read:write ratio and rate of object overwrite or replacement are important factors. We used the notion of "storage age", or number of object overwrites, as a way of normalizing wall-clock time. Storage age allows our results, or similar such results, to be applied across a number of read:write ratios and object replacement rates.
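As a toy illustration only (the size thresholds come from the abstract above; the function name and everything else are made up for this sketch), the paper's rule of thumb could be encoded like this:

```python
KB, MB = 1024, 1024 * 1024

def choose_store(size_bytes: int) -> str:
    """Toy encoding of the paper's rule of thumb (thresholds from the
    abstract above). The grey zone depends on the read:write ratio and
    overwrite rate the paper discusses, so this just flags it."""
    if size_bytes < 256 * KB:
        return "database"      # objects under 256K: database wins
    if size_bytes > 1 * MB:
        return "filesystem"    # objects over 1M: filesystem wins
    return "grey zone: depends on read:write ratio and overwrite rate"

print(choose_store(100 * KB))  # -> "database"
print(choose_store(5 * MB))    # -> "filesystem"
```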