High Frequency Database to use with ruby

I want to scrape a large number of webpages (1000/second) and save 1-2 numbers from these pages into a database. I want to manage the workers with RabbitMQ, but I also have to write the data somewhere.
Heroku PostgreSQL has a concurrency limit of 60 requests in their cheapest production tier.
Is PostgreSQL the best solution for this job?
Is it possible to setup a Postgres Database to perform 1000 writes per second in development on my local machine?

Is it possible to setup a Postgres Database to perform 1000 writes per second in development on my local machine?
Try it and see. If you've got an SSD, or don't require crash safety, then you almost certainly can.
You will find that, with anything you choose, you have to make trade-offs between durability and write latency.
If you want to commit each record individually, in strict order, you should be able to achieve that on a laptop with a decent SSD. You will not get it on something like a cheap AWS instance or a server with a spinning-rust hard drive, though, as those don't have good enough disk flush rates (pg_test_fsync is a handy tool for looking at this). This is true of anything that does genuine atomic commits of individual records to durable storage, not just PostgreSQL: about the best rate you're going to get is half the maximum disk flush rate, unless it's a purely append-only system, in which case the commit rate can equal the disk flush rate.
If you want higher throughput, you'll need to batch writes together and commit them in groups to spread out the disk sync overhead. In the case of PostgreSQL, the commit_delay option can be useful to batch commits together. Better still, buffer a few changes client-side and do multi-valued inserts. Turn off synchronous_commit for a transaction if you don't need a hard guarantee that it's committed before control returns to your program.
I haven't tested it, but I expect Heroku will let you set both of these parameters on your sessions using SET synchronous_commit = off or SET commit_delay = .... You should test it and see. In fact, you should run a simulated workload benchmark and see if you can make it go fast enough for your needs.
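A minimal sketch of that batching approach, using the Ruby pg gem (the readings table, its columns, and the pre-filled buffer are made up for illustration; this is not a drop-in implementation):

require "pg"

# Hypothetical table: readings(page_url text, value numeric)
conn = PG.connect(dbname: "scraper")

# Relax durability for this session only: a crash may lose the last few
# transactions, but commits no longer wait for the WAL flush to finish.
conn.exec("SET synchronous_commit = off")

# Rows buffered client-side by the scraper workers, e.g. [["http://a", 1.2], ...]
buffer = [["http://example.com/a", 1.2], ["http://example.com/b", 3.4]]

# Flush the buffer as one multi-valued INSERT inside a single transaction,
# spreading the commit overhead across many rows.
def flush(conn, rows)
  return if rows.empty?
  placeholders = rows.each_with_index.map { |_, i| "($#{2 * i + 1}, $#{2 * i + 2})" }.join(", ")
  conn.transaction do |tx|
    tx.exec_params("INSERT INTO readings (page_url, value) VALUES #{placeholders}", rows.flatten)
  end
end

flush(conn, buffer)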
If you can't, you can switch to alternative hosting that will, given appropriate configuration.
See also: How to speed up insertion performance in PostgreSQL

PostgreSQL is perfectly capable of handling such a job. Just to give you an idea, PostgreSQL 9.2 is expected to handle up to 14,000 writes per second, but this largely depends on how you configure, design, and manage the database, and on the available hardware (disk performance, RAM, etc.).
I assume the limit imposed by Heroku is there to avoid potential overloads. You may want to consider installing PostgreSQL on a custom server, or look at alternative solutions. For instance, Amazon recently announced support for PostgreSQL on RDS.
Finally, I just want to mention that for the majority of standard tasks, the "best solution" is largely dependent on your knowledge. An efficiently configured MySQL is better than a badly configured PostgreSQL, and vice-versa.
I know of companies that achieved unexpectedly good results with a specific database by heavily optimizing the setup and configuration of the engine. There are exceptions, indeed, but I don't think they apply to your case.


Which caching mechanism to use in my Spring application in the scenarios below

We are using a Spring Boot application with a MariaDB database. We get data from different services and store it in our database. While calling another service, we need to fetch data from the DB (based on a mapping) and call that service.
To avoid the database hit, we want to cache all the mapping data and use it to retrieve the data and call the service API.
So our ask is: add data to the cache when it gets created in the database (this could add up to millions of records) and remove it from the cache when the value of one of the columns is "xyz" (for example), or based on an eviction policy.
Should we use an in-memory cache such as Hazelcast/Ehcache, or Redis/Couchbase?
Please suggest.
Thanks
I mostly agree with Rick in terms of not building it until you need it; however, it is important these days to think early about where this caching layer would fit later and how to integrate it (for example, behind interfaces). Adding it to an unprepared system is always possible, but much more expensive (in terms of hours) and complicated.
OK, on to the actual question; disclaimer: I am a Hazelcast employee.
In general, for caching, Hazelcast, Ehcache, Redis and others are all good candidates. The first question you want to ask yourself, though, is: "Can I hold all the necessary records in the memory of a single machine?" With Ehcache in particular you get replication (all machines hold all the information), which means every single node needs to keep everything in memory; depending on how much you want to cache, that may not be optimal. In that case Hazelcast might be the better option, as we partition the data across a cluster and optimize access down to a single network hop, with minimal overhead on top of the network latency.
The second question is about serialization. Do you want to store information in a highly optimized serialization format (which needs code to turn it back into something human-readable), or do you want to store it as JSON?
The third question is about the number of clients and threads that will access the data store. Obviously a local cache like Ehcache is always the fastest option, at the cost of lots and lots of memory. Apart from that, the most important factor is the threading model the in-memory store uses: either it is multithreaded and scales nicely, or it is a single-threaded design that becomes a bottleneck once you exhaust that thread. You can work around that with more processes, but it is a workaround rather than a way to utilize today's systems to the fullest.
In more general terms, each of the systems you mention would do the job. The best tool, however, should be selected via a POC / prototype against your real-world use case. The important bit is real-world: a single thread behaves amazingly under low pressure (obviously way faster), but once exhausted it becomes a major bottleneck (again, obviously delaying responses).
I hope this helps a bit since, at least to me, every answer like "yes we are the best option" would be an immediate no-go for the person who said it.
Build InnoDB with the memcached Plugin
https://dev.mysql.com/doc/refman/5.7/en/innodb-memcached.html
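As a hedged sketch (assuming the plugin is enabled and a table has been mapped via its containers configuration; the server address, key and value here are made up), any memcached client, such as the Ruby dalli gem, can then read and write those InnoDB rows over the memcached protocol:

require "dalli"

# Talks the memcached protocol to the MySQL/InnoDB memcached plugin
# (default port 11211), so gets and sets go to the mapped InnoDB table.
cache = Dalli::Client.new("127.0.0.1:11211")

cache.set("page:42", "3.14")   # stored in the backing InnoDB table
puts cache.get("page:42")      # read back through the memcached protocol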

Which are the cons of a purely stream-based architecture against a Lambda architecture? [closed]

Disclaimer: I'm not a real-time architectures expert; I'd only like to throw out a couple of personal considerations and evaluate what others would suggest or point out.
Let's imagine we'd like to design a real-time analytics system. Following Nathan Marz's definition of the Lambda architecture, in order to serve the data we would need a batch processing layer (e.g. Hadoop), continuously recomputing views from a dataset of all the data, and a so-called speed layer (e.g. Storm) that constantly processes the subset of data that came in after the last full recomputation of the batch layer. You query the system by merging the results of the two together.
The rationale behind this choice makes perfect sense to me, and it's a combination of software engineering and systems engineering observations. Having an ever-growing master dataset of immutable, timestamped facts makes the system resilient to human errors in computing the views (if you make an error, you just fix it and recompute the views in the batch layer) and enables the system to answer virtually any query that might come up in the future. Also, such a datastore only needs to support random reads and batch inserts, whereas the datastore for the speed/real-time part needs to efficiently support both random reads and random writes, which increases its complexity.
My objection/trigger for a discussion is that, in certain scenarios, this approach might be overkill. For the sake of discussion, assume we make a couple of simplifications:
Let's assume that in our analytics system we can define beforehand an immutable set of use cases/queries that our system needs to be able to serve, and that they won't change in the future.
Let's assume that we have a limited amount of resources (engineering power, infrastructure, etc.) to implement it. Storing the whole set of elementary events coming into our system, instead of precomputed views/aggregates, may just be too expensive.
Let's assume that we successfully minimize the impact of human mistakes (...).
The system would still need to be scalable and handle ever-increasing traffic and data.
Given these observations, I'd like to know what would stop us from designing a fully stream-oriented architecture. What I imagine is an architecture where the events (e.g. page views) are pushed into a stream, which could be RabbitMQ + Storm or Amazon Kinesis, and where the consumers of those streams directly update the needed views through random writes/updates to a NoSQL database (e.g. MongoDB).
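A rough sketch of such a consumer, using the Ruby bunny and mongo gems (the queue, database, collection, and the trivial event payload are all assumptions for illustration): each incoming page-view event is folded into a precomputed view with an upsert.

require "bunny"
require "mongo"

# Stream of page-view events arriving via RabbitMQ.
connection = Bunny.new
connection.start
channel = connection.create_channel
queue   = channel.queue("page_views", durable: true)

# Precomputed view: page-view counts per page per hour.
views = Mongo::Client.new(["127.0.0.1:27017"], database: "analytics")[:page_view_counts]

# Manual acks so a message is only removed once the view update has succeeded.
queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
  page, hour = body.split(",")  # assume a trivial "page,hour" payload
  views.update_one(
    { page: page, hour: hour },
    { "$inc" => { count: 1 } },
    upsert: true
  )
  channel.ack(delivery_info.delivery_tag)
end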
At a first approximation, it looks to me like such an architecture could scale horizontally. Storm can be clustered, and Kinesis's expected QoS can also be reserved upfront. More incoming events would mean more stream consumers, and since they are totally independent, nothing stops us from adding more. Regarding the database, sharding it with a proper policy would let us distribute the increasing number of writes across an increasing number of shards. To avoid reads being affected, each shard could have one or more read replicas.
In terms of reliability, Kinesis promises to reliably store your messages for up to 24 hours, and a distributed RabbitMQ (or whatever queue system you choose) with proper use of acknowledgement mechanisms could probably satisfy the same requirement.
Amazon's documentation on Kinesis deliberately (I believe) avoids locking you into a specific architectural solution, but my overall impression is that they would like to push developers to simplify the Lambda architecture and arrive at a fully stream-based solution similar to the one I've described.
To be slightly more compliant with the Lambda architecture requirements, nothing stops us from having, in parallel with the consumers constantly updating our views, a set of consumers that process the incoming events and store them as atomic immutable units in a different datastore that could be used in the future to produce new views (via Hadoop, for instance) or to recompute faulty data.
What's your opinion on this reasoning? I'd like to know in which scenarios a purely stream-based architecture would fail to scale, and if you have any other observations or pros/cons of a Lambda architecture vs a stream-based architecture.

Is application caching in a server farm worth the complexity? [closed]

I've inherited a system where data from a SQL RDBMS that is unlikely to change is cached on a web server.
Is this a good idea? I understand the logic of it - I don't need to query the database for this data with every request because it doesn't change, so just keep it in memory and save a database call. But, I can't help but think this doesn't really give me anything. The SQL is basic. For example:
SELECT StatusId, StatusName FROM Status WHERE Active = 1
This gives me fewer than 10 records. My database is located in the same data center as my web server. Modern databases are designed to store and recall data. Is my application cache really that much more efficient than the database call?
The problem comes when I have a server farm and have to come up with a way to keep the caches in sync between the servers. Maybe I'm underestimating the cost of a database call. Is the performance benefit gained from keeping data in memory worth the complexity of keeping each server's cache synchronized when the data does change?
The benefits of caching are related to the number of times you need the cached item and the cost of getting it. Your status table, even though it is only 10 rows long, can be "costly" to get if you have to run a query every time: establish a connection if needed, execute the query, push the data over the network, and so on. If it is used frequently enough, the benefits add up and can be significant. Say you need to check some status 1000 times a second, or on every website request: you have saved 1000 queries, your database can do something more useful, and your network is not loaded with chatter. For your web server, the cost of retrieving the item from the cache is usually minimal (unless you're caching tens or hundreds of thousands of items), so pulling something from the cache will be quicker than querying the database almost every time. If your database is the bottleneck of your system (which is the case in a lot of systems), then caching is definitely useful.
But the bottom line is that it is hard to say yes or no without running benchmarks or knowing the details of how you're using the data. I've just highlighted some of the things to consider.
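To make the trade-off concrete, here is a minimal in-process cache-aside sketch (in Ruby purely for illustration, since the exact stack isn't specified; the 5-minute TTL and the injected db object are assumptions). The cached path skips the connection/query/network cost entirely, at the price of serving data up to ttl seconds stale:

# Minimal cache-aside helper: returns the cached statuses if they are still
# fresh, otherwise re-runs the query and caches the result for `ttl` seconds.
class StatusCache
  def initialize(db, ttl: 300)
    @db  = db           # anything responding to #query
    @ttl = ttl
    @value = nil
    @expires_at = Time.at(0)
  end

  def statuses
    if Time.now > @expires_at
      @value = @db.query("SELECT StatusId, StatusName FROM Status WHERE Active = 1")
      @expires_at = Time.now + @ttl
    end
    @value
  end
end

Note that this per-server cache is exactly what creates the synchronization problem you describe: each server's copy expires and refreshes independently.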
There are other factors which might come into play; for example, the use of EF can add considerable extra processing to a simple data retrieval. The quantity of requests, not just the volume of data, can be a factor.
Future design might influence your decision - perhaps the cache gets moved elsewhere and is no longer co-located.
There's no right answer to your question. In your case, maybe there is no advantage. Though there is already a disadvantage to not using a cache - you have to change existing code.

Oracle setup required for heavy-ish load

I am trying to make a comparison between a system setup using Hadoop and HBase and achieving the same using Oracle DB as the back end. I lack knowledge on the Oracle side of things needed to come to a fair comparison.
The work load and non-functional requirements are roughly this:
A) 12M transactions on two tables with one simple relation and multiple (non-text) indexes within 4 hours. That amounts to 833 transactions per second (TPS), sustained. This needs to be done every 8 hours.
B) Make sure that all writes are durable (so a running transaction survives a machine failure in the case of a clustered setup) and have a decent level of availability. By a decent level of availability, I mean that regular failures such as a disk failure or a single network interface / TCP connection drop should not require human intervention. Rare failures may require intervention, but should be solved by just firing up a cold standby that can take over quickly.
C) Additionally add another 300 TPS, but have these happen almost continuously 24/7 across many tables (but all in pairs of two with the same simple relation and multiple indexes)?
Some context: this workload is 24/7 and the system needs to hold 10 years' worth of historical data available for live querying. Query performance can be a bit worse than sub-second, but must be lively enough to consider for day-to-day usage. The ETL jobs are set up in such a way that there is little churn. Also, in a relational setup, this workload would lead to little lock contention. I would expect index updates to be the major pain. To make the comparison as fair as possible, I would expect to use the loosest consistency level that Oracle provides.
I have no intention of bashing Oracle. I think it is a great database for many uses. I am trying to get a feeling for the tradeoff there is between going open source (and NoSQL) like we do and using a commercially supported, proven setup.
Nobody can answer this definitively.
When you go buy a car you can sensibly expect that its top speed, acceleration and fuel consumption will be within a few percent of values from independent testing. The same does not apply to software in general nor to databases in particular.
Even if you had provided exact details of the hardware, OS and data structures, along with full details of the amount of data stored as well as transactions, the performance could easily vary by a factor of 100 times depending on the pattern of usage (due to development of hot spots of record caching, disk fragmentation).
However, having said that, there are a few pointers I can give:
1) invariably a NoSQL database will outperform a conventional DBMS - the raison d'être of NoSQL databases is performance and parallelization. That does not mean that conventional DBMSs are redundant - they provide much greater flexibility for interacting with data
2) for small to mid range data volumes, Oracle is relatively slow in my experience compared with other relational databases. I'm not overly impressed with Oracle RAC as a scalable solution either.
3) I suspect that the workload would require a mid-range server for consistent results (something in the region of $8k+) running Oracle
4) While having a hot standby is a quick way to cover all sorts of outages, in a lot of cases, the risk/cost/benefit favours approaches such as RAID, multiple network cards, UPS rather than the problems of maintaining a synchronized cluster.
5) Support - have you ever bothered to ask the developers of an open source software package if they'll provide paid for support? IME, the SLAs / EULAs for commercial software are more about protecting the vendor than the customer.
So if you think it's worth considering, and cost is not a big issue, then the best answer would be to try it out for yourself.
No offense here, but if you have little Oracle knowledge there is really no way you can do a fair comparison. I've worked with teams of very experienced Oracle DBAs and sys admins who would argue about setups for comparison tests (the hardware/software setup variables are almost infinite). Usually these tests were justifications for foregone conclusions about infrastructure direction (money being a key issue as well).
Also, do you plan on hiring a team of Hadoop experts to manage your company's data infrastructure? Oracle isn't cheap, but you can find very seasoned Oracle professionals (from DBAs to developers to analysts), not too sure about hadoop admins/dbas...
Just food for thought (and no, I don't work for Oracle ;)

Is memcached a dinosaur in comparison to Redis? [closed]

I have worked quite a bit with memcached over the last few weeks and just found out about Redis. When I read this part of their readme, I suddenly got a warm, cozy feeling in my stomach:
Redis can be used as a memcached on steroids because is as fast as memcached but with a number of features more.
Like memcached, Redis also supports setting timeouts to keys so that this key will be automatically removed when a given amount of time passes.
This sounds amazing. I also found this page with benchmarks: http://www.ruturaj.net/redis-memcached-tokyo-tyrant-mysql-comparison
So, honestly - is memcache really that old dinosaur that is a bad choice from a performance perspective when compared to this newcomer called Redis?
I hadn't heard a lot about Redis previously, hence the question!
Depends on what you need, in general I think that:
You should not care too much about performance. Redis is faster per core with small values, but memcached is able to use multiple cores with a single executable and TCP port, without help from the client. Also, memcached is faster with big values, on the order of 100k. Redis has recently improved a lot with big values (unstable branch), but memcached is still faster in this use case. The point here is: neither one is likely to be your bottleneck in terms of the queries per second they can deliver.
You should care about memory usage. For simple key-value pairs memcached is more memory efficient. If you use Redis hashes, Redis is more memory efficient. Depends on the use case.
You should care about persistence and replication, two features only available in Redis. Even if your goal is to build a cache it helps that after an upgrade or a reboot your data are still there.
You should care about the kind of operations you need. Redis has a lot of more complex operations; even just considering the caching use case, you can often do a lot more in a single operation, without requiring data to be processed client-side (which would otherwise need a lot of I/O). These operations are often as fast as plain GET and SET. So if you need not just GET/SET but more complex things, Redis can help a lot (think of timeline caching).
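A small sketch of that timeline-caching idea with the Ruby redis gem (the key names and list size are made up): push, trim, and read each happen server-side in a single command, with no client-side processing of the whole list.

require "redis"

redis = Redis.new

# Keep only the 100 most recent item ids per user, entirely server-side.
def push_to_timeline(redis, user_id, item_id)
  key = "timeline:#{user_id}"
  redis.lpush(key, item_id)   # prepend the new item
  redis.ltrim(key, 0, 99)     # cap the list at 100 entries
end

def read_timeline(redis, user_id, count = 20)
  redis.lrange("timeline:#{user_id}", 0, count - 1)
end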
Without a use case it is hard to pick the right one now, but I think that for a lot of things Redis makes sense, since even when you don't want to use it as a DB, being a lot more capable it lets you solve more problems: not just caching, but also messaging, ranking, and so forth.
P.s. of course I could be biased since I'm the lead developer of the Redis project.
So, honestly - is memcache really that old dinosaur that is a bad choice from a performance perspective when compared to this newcomer called Redis?
Comparing feature sets, Redis has way more functionality;
Comparing ease of installation, Redis is also a lot easier - no dependencies required;
Comparing active development, Redis is also better;
I believe memcached is a little bit faster than Redis - it does not touch the disk at all;
My opinion is that Redis is a better product than memcached.
Memcache is an excellent tool still and VERY reliable.
Instead of looking at this issue from the perspective of who is faster at the sub-100 ms range, look at the performance per "class" of software.
Does it use only local RAM? -> fastest
Does it use remote RAM? -> fast
Does it use RAM plus hard disk? -> oh hurm.
Does it use only hard disk? -> run!
What memcached does that Redis doesn't do is least-recently-used eviction of values from the cache. With memcached, you can safely set as many values as you like, and when they overflow memory, the ones you haven't used recently will be deleted. With Redis, you can only approximate this, by setting a timeout on everything; when it needs to free up memory, it will look at three random keys and delete the one that's the closest to expiring.
That's the main difference, if you're just using it as a cache.
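In practice the difference shows up in how you write to each store. A small sketch using the Ruby dalli and redis gems (the server addresses, keys, and one-hour timeout are made up), following the eviction behaviour described above:

require "dalli"
require "redis"

memcached = Dalli::Client.new("127.0.0.1:11211")
redis     = Redis.new

# memcached: no TTL needed; when memory fills up, the least recently
# used entries are evicted automatically.
memcached.set("user:1:profile", "...")

# Redis (as described above): set a timeout on everything, so that when
# memory must be freed there is always something close to expiring.
redis.set("user:1:profile", "...", ex: 3600)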
You may also want to look at Membase.
http://www.northscale.com/products/membase_server.html
I have not used it, but it appears to be similar to Redis in that it is a memory-centric KV store with persistence. The major differences from what I can see are:
Redis has significantly more data manipulation capability (ordered sets, etc.)
Redis has a pending Redis Cluster project to add horizontal scalability
Redis has a single tier of data offload to disk (VM) based on a hybrid algorithm that considers both LRU and the size of the object.
Membase uses the memcached wire protocol - useful as an upgrade path for existing applications
Membase is set up to scale horizontally using a distributed hashtable approach
Membase can support multiple tiers of data offload using an LRU approach (very seldom used data goes to disk, somewhat seldom used data goes to SSD, frequently used data stays in RAM)
Not sure about TTL capability in Membase.
The choice may depend on the degree to which your application can leverage the extra data manipulation functionality in Redis.
Hazelcast supports the memcached protocol natively
https://web.archive.org/web/20140601010929/http://hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html
and thus is a modern alternative to memcached. You should try all the solutions to see what works best for you.
