Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
Currently we use Oracle for storing images in the application, but we expect to see a lot of images/videos in the application. We would like to move away from Oracle so that we can shard easily and achieve high throughput. Any recommendations?
Has anyone tried using NoSQL databases such as Couchbase/MongoDB for this purpose? Are they optimized for it?
I see that Cloudinary uses Amazon S3 for this purpose, but I am looking for something that can be deployed in our datacenter due to privacy concerns.
From your problem description, I can't see any indication pro or contra a NoSQL database.
Having media like pictures, sound, or video in a database just means having a large uninterpreted binary object. Uninterpreted means the database can store and deliver the binary, but can't analyze it for its properties, take it as a basis for queries, and the like (which is what databases are made for).
Both relational and non-relational databases provide data types for that kind of BLOB. The features in which they differ are, for example,
tabular vs. tree-structured data models - not applicable to the BLOB, as it will be one attribute, no matter how large it becomes,
different sorts of transaction logic (CAP theorem) that aren't addressed by the BLOB subject matter.
So I'm afraid your architecture will need to be decided on a much broader basis than just your media data. What are your data structures? What are your query and update scenarios?
What I see people do with Couchbase is store all of the metadata about the image in a JSON document in Couchbase, but host the image itself in something optimized for files. You get the benefits of both worlds. In the kind of use case you mention, in my experience a NoSQL database will serve you much better than a relational database.
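The hybrid pattern described above can be sketched in a few lines. This is a hedged illustration, not a specific Couchbase or S3 API: plain dicts stand in for the object store and the document database, and all field names are made up.

```python
import hashlib
import json

def store_image(image_bytes, title, object_store, doc_store):
    # Content-addressed key; stands in for an S3/Swift object key.
    key = hashlib.sha256(image_bytes).hexdigest()
    object_store[key] = image_bytes
    # The metadata lives as a JSON document keyed by the same id,
    # so it can be queried without touching the binary itself.
    doc_store[key] = json.dumps({
        "title": title,
        "size": len(image_bytes),
        "object_key": key,
    })
    return key

object_store, doc_store = {}, {}
key = store_image(b"\x89PNG...", "sunset", object_store, doc_store)
```

Queries and indexing run against the small JSON documents; the file store only ever serves bytes by key.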
Having managed very large relational and NoSQL databases with blobs in them, IMO it is a terrible idea in most cases, regardless of the database type. So I wrote up this blog post for just such a situation.
As you are looking for private deployment in your data center, you may consider MongoDB or OpenStack Swift.
I have seen people using MongoDB gridfs (https://docs.mongodb.com/manual/core/gridfs/) for storing images/videos.
The advantages of using MongoDB gridfs:
You can use MongoDB replica set for fault tolerance/high availability.
You can access a portion of a large file without loading the whole file into memory. Because MongoDB stores files in small chunks (255 KB), video files can be streamed faster.
You can scale using MongoDB sharding.
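To make the chunking point above concrete, here is an illustrative sketch of the fixed-size splitting GridFS performs. This is not the MongoDB driver, just the arithmetic, using the 255 KB default chunk size mentioned above.

```python
# Default GridFS chunk size, as noted above.
CHUNK_SIZE = 255 * 1024

def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Return the chunks a GridFS-like store would write for `data`."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

video = bytes(600 * 1024)  # a fake 600 KB "video" file
chunks = split_into_chunks(video)
# 600 KB splits into two full 255 KB chunks plus a 90 KB remainder,
# so a reader can fetch and stream chunk by chunk instead of loading
# the whole file into memory.
```

With real GridFS, `fs.get(file_id).read(n)` pulls only the chunks needed to satisfy the read, which is what makes range requests and streaming cheap.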
Openstack Swift is a highly available, distributed, eventually consistent object/blob store comparable to Amazon S3, which you can deploy in your data center.
OpenStack Swift is also used by many companies; Rackspace's Cloud Files runs on Swift. You can take a look at the documentation:
http://docs.openstack.org/developer/swift/
S3 has a very strong commitment to privacy. What are your concerns regarding S3? Also, where are you planning to move your storage once you move away from Oracle?
Closed 2 years ago.
I'm working on a game application where I need real-time data for the leaderboards I'm building. I've read a bunch of Stack Overflow posts and company blogs, but honestly I'm not sure which option best fits my use case. I am using DynamoDB to record players' recent moves, and the history of moves is in Kafka. I am looking to stream data from these two sources into a database that my leaderboard service can then query to render the contents of each leaderboard. My data velocity is modest (1K game events/sec). I found three different databases that I could use; has anybody used any of them for game leaderboarding? If so, can you share the advantages or pains you encountered while doing so? According to all three companies, they can do real-time data.
You would have to evaluate the scale and performance that you require, and it is difficult for me to estimate those based on the data you provided, but I can do a feature comparison of some of these systems.
The first option is to run your leaderboards by querying DynamoDB itself, in which case you do not need any additional systems. The advantage, obviously, is that there is one less component to manage. But I am assuming that your leaderboards need complex logic to render, and because the DynamoDB API deals with keys/values, you would have to fetch a lot of data from DynamoDB for every query needed to render the leaderboard.
The second option you specified is Elasticsearch. It is a great system that returns query results really fast because it stores data as an inverted index. However, you won't be able to do JOINs between your DynamoDB data and the Kafka stream. You sure can run a lot of concurrent queries on Elastic, though. I am assuming you need concurrent queries because you are powering an online game where multiple players are accessing the leaderboard at the same time.
The third option, Druid, is a hybrid between a data lake and a data warehouse. You can store large volumes of semi-structured data, but unlike Elastic, you would need to flatten the nested JSON data at ingest time. I have used Druid for large-scale analytical processing to power my dashboards, and it does not support as high a concurrency as Elastic.
Rockset seems to be a much newer product and is a hosted service in the cloud. It says that it builds an inverted index like Elastic and also supports JOINs. It can auto-tail data from DynamoDB (using change streams) and Kafka. I do not see any performance numbers on the website, but the functionality is very compatible with what I would need for building a game leaderboard.
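Whichever system ends up serving it, the leaderboard query itself is a simple aggregation. A minimal, database-agnostic sketch (the event shape and field names are illustrative, standing in for records tailed from DynamoDB/Kafka):

```python
from collections import defaultdict

# Fake game events standing in for the streamed data sources.
events = [
    {"player": "alice", "points": 30},
    {"player": "bob", "points": 50},
    {"player": "alice", "points": 40},
]

def top_n(events, n=10):
    """Sum points per player and return the top n (player, total) pairs."""
    totals = defaultdict(int)
    for e in events:
        totals[e["player"]] += e["points"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

The database choice largely comes down to where this aggregation runs: in the application after bulk fetches (the DynamoDB option), or pushed down into the store's query engine (the Elastic/Druid/Rockset options).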
We are creating a site where users will upload images that are classifiable and searchable.
My question concerns the storing of those images: what would make a solid, maintainable solution?
I've looked at S3 - it looks promising.
If S3 is a good option, where would I store the references to the objects (along with the metadata/tags)?
Thanks :)
If I were architecting such a system, I would certainly look no further than S3 for scalability and durability for actually storing the images -- and thumbnails -- and metadata, to some extent.
S3 metadata storage is limited to 2KB (total number of bytes of all keys and all values combined), is limited to US-ASCII, and is not indexed -- you have to fetch the metadata for the specific object. For many applications, this is entirely sufficient but that's very doubtful in your case.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
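As a quick illustration of that 2 KB limit, here is a hedged sketch of how you might validate user-defined metadata before an upload. This is pure Python with no AWS calls; the example keys and values are made up.

```python
def metadata_size(meta):
    """Total bytes of all user-defined metadata keys and values combined,
    which is what S3 counts against its 2 KB limit."""
    return sum(len(k.encode("ascii")) + len(v.encode("ascii"))
               for k, v in meta.items())

meta = {"camera": "nikon-d750", "tags": "sunset,beach"}
within_limit = metadata_size(meta) <= 2048  # the 2 KB ceiling
```

The `.encode("ascii")` call doubles as a check of the US-ASCII restriction: non-ASCII values would raise an error here, just as they are not accepted as-is by S3.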
So, the question "is S3 a good option" is easily answered: if you mean among AWS services, the answer is yes; it's difficult to argue against it being the best fit.
You may also consider CloudFront -- not instead of, but in addition to S3. It can improve load times by caching your "popular" content closer to where users are located, among other things.
Where to store the references to the objects goes off into the land of "opinion based," which we don't do on Stack Overflow. The answer is, of course, "in a database," but AWS has options here.
I'm a relational database DBA, so of course, my inclination is that everything should have a relational database (such as RDS) as its authoritative data store, while others would probably say the DynamoDB NoSQL database offering would be a useful data store.
From there (wherever "there" is), CloudSearch could be populated with the metadata, keywords, etc., for processing the actual search operations, using indexes it builds which are potentially better suited to search-intensive operations than proper databases. I would not, however, try to use CloudSearch as the authoritative store of all your valuable metadata. Search indexes should be treated as disposable, rebuildable assets... although I fear even that statement might strike some as being opinion-based.
One thing that isn't a matter of opinion is that all of these various cloud services allow you to spin up a substantial proof-of-concept infrastructure at costs that are so low as to have been unimaginable just a few years ago... so you can try them, play with them, and throw them away if they don't do what you expect. You don't have to buy before you try.
Closed 7 years ago.
I am looking for a database/mechanism where I can write and read data with high performance.
This storage will be used for logging important information across multiple systems. Since the logged data is critical, read performance should be very fast, as the data will be used to show history. Since we never update them, delete them, or do any kind of joins, I am looking for the right solution. We will probably archive the data eventually, but that is something we can deal with.
I tried looking at different sources to understand the different NoSQL databases, but experts' opinions are always better :)
Must Have:
1. Fast Read without fail
2. Fast Write without fail
3. Random access Performance
4. Replication: if one node goes down, another should immediately be up and working
5. Concurrent write/read data
Good to Have:
1. Searching the content, e.g. analysing the data for auditing, with or without indexes
Don't required:
1. Transactions are not required at all
2. Update never happens
3. Delete never happens
4. Joins are not required
Referred: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
Disclosure: Kevin Porter is a Senior Software Engineer at Aerospike, Inc. since May 2013. (ref)
Be sure to consider Aerospike; Aerospike dominates in the adtech space, where high-throughput reads and writes are required. Aerospike is frequently touted as having "the speed of Redis with the scalability of Cassandra." For searching/querying see Aerospike's secondary index documentation.
For more information see the discussion/articles below:
Aerospike vs Cassandra
Aerospike vs Redis and Mongo
Aerospike Benchmarks
Lastly verify the performance for yourself with the One million TPS on EC2 Instructions.
Let me be the Cassandra sponsor.
Disclaimer: I don't say Cassandra is better than the others, because I don't know Mongo/Redis/the rest deeply enough, and I don't even want to get into that kind of comparison.
The reason I suggest Cassandra is that your needs match perfectly with what Cassandra offers, and your "don't require" list is a set of features that are either not supported in Cassandra (joins, for instance) or considered an anti-pattern (deletes, and in some situations updates).
From your "Must Have" list, point by point:
Fast read without fail: supported. You can choose the consistency level of each read operation, deciding how important it is to retrieve the freshest information versus how important speed is.
Fast Write without fail: Same as point 1
Random access performance: when coming into the Cassandra world, you have to consider many parameters to get good random access performance, but the most important one that comes to mind is the data model. If you create a data model that scales horizontally (take a look here) and you avoid hotspots, you get what you need. If you model your DB well, you should get O(1) for each operation, since the data is structured to be queried.
Replication: here Cassandra is even better than you might think. If one node goes down, nothing changes for the cluster, and everything(*) keeps working perfectly. Cassandra has no single point of failure. I can tell you that with an older Cassandra version I've had an uptime of more than 3 years.
Concurrent read/write of data: Cassandra uses the LWW (last-write-wins) policy to handle concurrent writes to the same key. The system supports multiple readers and writers, and with newer protocols also async operations.
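The last-write-wins policy can be sketched in a couple of lines. This is an illustration of the resolution rule, not Cassandra's actual internals; the timestamps and values are made up.

```python
def lww_merge(writes):
    """writes: iterable of (timestamp, value) pairs for the same key.
    The write with the highest timestamp wins, as in last-write-wins."""
    return max(writes, key=lambda w: w[0])[1]

# Two concurrent writes to the same key, stamped in microseconds:
writes = [(1700000000123, "v1"), (1700000000456, "v2")]
winner = lww_merge(writes)  # the later write
```

This is also why the timestamp mentioned below is available per cell: it is the very value the merge rule keys on.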
There are lots of other interesting features Cassandra offers: linear horizontal scaling is the one I appreciate most, but there is also the fact that you can know the instant at which every piece of data was updated (the LWW timestamp), counter features, and so on.
(*) - provided you don't use consistency level ALL, which, IMHO, should NEVER be used in such a system.
Here are a few more links on how you can span in-memory with disk (DRAM, SSD, and disk storage) with Aerospike:
http://www.aerospike.com/hybrid-memory/
http://www.aerospike.com/docs/architecture/storage.html
I think everyone is right in terms of matching the specific DB to your specific use case. For instance, Aerospike is optimal for key-value data. Other options might be better.
By way of analogy, I'll always remember how, decades ago, a sister of mine once borrowed my computer and wrote her term paper in Microsoft Excel. Line after line was a different row of a spreadsheet. It looked ugly as heck, but, uh, okay. She got the task done. She cursed and swore at how difficult it was to edit the thing. No kidding!
Choosing the right NoSQL database for the right task will either make your job a breeze, or could cause you to curse a blue streak if you decided on the wrong basic tool for the task at hand.
Of course, every vendor's going to defend their product. I think it's best the community answer the question. Here's another Stack Overflow thread answering a similar question:
Has anyone worked with Aerospike? How does it compare to MongoDB?
By the way, can you give us any more specific insight into what type of problem you are trying to solve?
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I am evaluating a number of different NoSQL databases to store time series JSON data. ElasticSearch has been very interesting due to its query engine; I just don't know how well suited it is to storing time series data.
The data is composed of various metrics and stats collected at various intervals from devices. Each piece of data is a JSON object. I expect to collect around 12GB/day, but only need to keep the data in ES for 180 days.
Would ElasticSearch be a good fit for this data vs MongoDB or Hbase?
You can read about an ElasticSearch time-series use-case example here.
But I think columnar databases are a better fit for your requirements.
My understanding is that ElasticSearch works best when your queries return a small subset of results, and it caches such parameters for later use. If the same parameters are used in queries again, it can use the cached results together in a union, hence returning results really fast. But with time series data, you generally need to aggregate data, which means you will be traversing a lot of rows and columns together. Such behavior is quite structured and easy to model, in which case there does not seem to be a reason why ElasticSearch should perform better than columnar databases. On the other hand, it may provide ease of use and require less tuning, all of which may make it preferable.
Columnar databases generally provide a more efficient data structure for time series data. If your query structures are known well in advance, then you can use Cassandra. Beware that if your queries do not use the primary key, Cassandra will not be performant. You may need to create different tables with the same data for different queries, as its read speed depends on the way it writes to disk. You need to learn its intricacies; a time-series example is here.
Another columnar database that you can try is the columnar extension for PostgreSQL. Considering that your maximum database size will be about 180 days * 12 GB/day = 2.16 TB, this method should work perfectly, and may actually be your best option. You can also expect significant size compression of about 3x. You can learn more about it here.
Using time-based indices (for instance, one index per day), together with the index-template feature and an alias to query all indices at once, it could be a good match. Still, there are many factors you have to take into account, such as:
- type of queries
- structure of the documents and the query requirements over this structure
- amount of reads versus writes
- availability, backups, monitoring
- etc.
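The daily-index scheme can be sketched as follows. The index prefix and dates are illustrative; the only fixed inputs are the question's 180-day retention window.

```python
from datetime import date, timedelta

def daily_index(d, prefix="metrics-"):
    """Name of the index receiving writes for day `d`."""
    return f"{prefix}{d:%Y.%m.%d}"

today = date(2016, 7, 1)
write_index = daily_index(today)  # today's writes land here
# An alias (or the pattern "metrics-*") queries all daily indices at
# once; with 180-day retention, indices from before this date can be
# dropped wholesale, which is far cheaper than deleting documents:
expired_before = daily_index(today - timedelta(days=180))
```

Dropping a whole expired index is the main operational win of this layout for time series data with a fixed retention period.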
Not an easy question to answer with yes or no; I'm afraid you will have to do more research yourself before you can really say that it is the best tool for the job.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
This might seem like an inane question, but with all the buzz about big data I was curious how the typical datasets used in big data are sourced. Twitter keywords seem to be a common source, but what are the origins of the huge Twitter feed files that get analysed? I saw an example where there was an analysis of election-related words like Obama and Romney. Has someone queried the Twitter API and downloaded, effectively, several terabytes of tweets? Does Twitter even want people hitting their servers that hard? Or is this data already "owned" by the companies doing the analytics? It might sound an odd scenario, but most of the articles I have seen are fuzzy about these basic physical steps. Any links to good articles or tutorials that address these fundamental issues would be most appreciated.
Here are some ideas to get sources of Big Data:
As you pointed out, Twitter is a great place to grab data, and there's a lot of useful analysis to do. If you're taking the online course about Data Science, one of the assignments is actually to get live data from Twitter to analyze, so I would recommend you take a look at that assignment, as the process of getting live Twitter data is described in detail. You could let the live stream run for days, and it would generate gigabytes worth of data the longer it runs.
If you have a website, you could collect your web server logs. It might not be a lot if it's a small website, but for large websites that see a lot of traffic this is a huge source of data. Think about what you could do if you had the Stack Overflow web server logs...
Oceanographic data, which you can find at Marinexplore: they have some huge datasets available that you can download and analyze yourself if you want to work with ocean data.
Web crawling data, for example as used by search engines. You can see some open web crawl data at Common Crawl, which is already on Amazon S3, so it's ready for you to run your Hadoop jobs on it! You can also get data from Wikipedia here.
Genomic data is now available on a very large scale and you can find genome data on the 1000 genomes project via FTP.
...
More generally, I would advise you to look at the Amazon AWS public datasets, which include a bunch of big datasets on various topics, if you're not just looking at Twitter but at Big Data in a more general context.
Most businesses get their social data from Twitter Certified data partners such as Gnip.
Note: I work for Gnip.