Performance impact of having a data access layer/service layer?

I need to design a system which has these basic components:
A webserver which will be getting ~100 requests/sec. The webserver only needs to dump data into the raw data repository.
A raw data repository with a single table which receives ~100 rows/sec from the webserver.
A raw data processing unit (simple processing, not much: removing invalid raw data, inserting missing components into damaged raw data, etc.)
Processed data repository
Does it make sense in such a system to have a service layer on which all components would be built? All inter-component interaction will go through the service layers. While this would make the system easily upgradeable and maintainable, would it not also have a significant performance impact since I have so much traffic to handle?

Here's what can happen unless you guard against it.
In the communication between layers, some format is chosen, like XML. Then you build it and run it and find out the performance is not satisfactory.
Then you mess around with profilers which leave you guessing what the problem is.
When I worked on a problem like this, I used the stackshot technique (pausing the program repeatedly and reading the call stacks) and quickly found the problem. You would have thought it was I/O; it wasn't. Converting data structures to XML, and parsing XML back into data structures, was taking roughly 80% of the time. It wasn't too hard to find a better way to do that. Result: a 5x speedup.
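To make the stackshot idea concrete, here is a rough Java sketch (an illustration only, not the tooling used back then): sample every thread's stack from a background thread every few hundred milliseconds and count which frames keep appearing. If XML conversion is eating 80% of the time, it dominates the counts almost immediately.

    import java.util.HashMap;
    import java.util.Map;

    /** Start this in a daemon thread inside the app you want to inspect. */
    public class StackSampler implements Runnable {
        private final Map<String, Integer> hits = new HashMap<>();

        @Override
        public void run() {
            try {
                for (int sample = 0; sample < 100; sample++) {
                    for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
                        if (stack.length == 0) continue;
                        // Topmost frame per thread; in practice look at whole stacks,
                        // filtered down to your own packages.
                        String frame = stack[0].getClassName() + "." + stack[0].getMethodName();
                        hits.merge(frame, 1, Integer::sum);
                    }
                    Thread.sleep(200); // sampling interval
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            hits.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(10)
                .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
        }
    }

Starting it is just new Thread(new StackSampler()).start(); the frames that show up over and over are where the time goes.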

What do you see as the costs of having a separate service layer?
How do those costs compare with the costs you must incur? In your case that seems to be at least
a network read for the request
a database write for raw data
a database read of raw data
a database write of processed data
Plus some data munging.
What sort of services do you have in mind? Perhaps:
saveRawData()
getNextRawData()
writeProcessedData()
Why is the overhead of these any more than a procedure call? "Service" does not need to imply "separate process" or "web-service marshalling".
I contend that structure is always of value: separation of concerns in your application really matters. Compared with database activity, a few extra procedure calls will rarely cost much.
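To make that concrete, here is a minimal sketch of such a service layer as plain Java interfaces (the repository types are hypothetical placeholders): every call is an ordinary in-process method call, with no sockets or marshalling anywhere.

    // Hypothetical in-process service layer: no sockets, no marshalling —
    // just interfaces that keep the components decoupled.

    class RawData { /* fields omitted */ }
    class ProcessedData { /* fields omitted */ }

    interface RawDataRepository {                 // assumed repository abstraction
        void insert(RawData data);
        RawData nextUnprocessed();
    }

    interface ProcessedDataRepository {
        void insert(ProcessedData data);
    }

    interface RawDataService {
        void saveRawData(RawData data);           // called by the web server
        RawData getNextRawData();                 // called by the processing unit
        void writeProcessedData(ProcessedData data);
    }

    // One implementation talks straight to the repositories. Swapping in a
    // remote or queue-backed implementation later does not change any caller.
    class LocalRawDataService implements RawDataService {
        private final RawDataRepository rawRepo;
        private final ProcessedDataRepository processedRepo;

        LocalRawDataService(RawDataRepository rawRepo, ProcessedDataRepository processedRepo) {
            this.rawRepo = rawRepo;
            this.processedRepo = processedRepo;
        }

        @Override public void saveRawData(RawData data) { rawRepo.insert(data); }
        @Override public RawData getNextRawData() { return rawRepo.nextUnprocessed(); }
        @Override public void writeProcessedData(ProcessedData data) { processedRepo.insert(data); }
    }

If you later need to move a component to another machine, you swap the implementation behind the interface; the callers don't change.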
In passing: persisting the raw data might best be done via a queueing system. You can then get some natural scaling by having many queue readers on separate machines if you need them. In effect, the queueing system naturally introduces some service-like concepts.
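A rough sketch of that shape, using an in-process BlockingQueue purely as a stand-in for a real broker (RabbitMQ, SQS, etc.), just to show how the web server and the raw-data writers decouple:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Stand-in for a real message broker; the web server offers raw records,
    // and any number of readers (on other machines in a real deployment)
    // drain and persist them.
    class RawDataQueue {
        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(100_000);

        // Called from the web request handler: a cheap hand-off.
        boolean publish(String rawRecord) {
            return queue.offer(rawRecord);
        }

        // Called by each queue reader; blocks until a record is available.
        String take() throws InterruptedException {
            return queue.take();
        }
    }

    class QueueReader implements Runnable {
        private final RawDataQueue queue;
        QueueReader(RawDataQueue queue) { this.queue = queue; }

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String record = queue.take();
                    // persist the record to the raw data repository here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }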

Personally, I feel that you might be focusing too much on low-level implementation details when designing the system. Before looking at how to lay out the components, assemblies or services, you should be thinking about how to architect the system.
You could start with the following high level statements from which to build your system architecture around:
Confirm the technical skill set of the development team and the operations/support team.
Agree on an initial, finite list of systems that will integrate with your service, the protocols they support and some SLAs.
Decide on the messaging strategy.
Understand how you will deploy your service/system.
Decide on the choice of middleware (ESBs, Message Brokers, etc), databases (SQL, Oracle, Memcache, DB2, etc) and 3rd party frameworks/tools.
Decide on your caching and data latency strategy.
Break your application into the various areas of business responsibility - This will allow you to split up the work and allow easier communication of milestones during development/testing and implementation.
Design each component as required to meet the areas of responsibility. The areas of responsibility should automatically lead you to decide how to design each component, assembly or service.
Obviously not all of the above will match your specific case but I would suggest that they should at least be given some thought.
Good luck.

Abstraction and tiering will introduce latency, but the real question is: what are you GAINING that makes the cost worthwhile? Loose coupling, governance, scalability and maintainability are worth real money.
Even the best-designed layered app will exhibit more latency than an app talking directly to a DB. Users who know the original system will feel the difference. They may not like it, so this can be a political issue as much as a technical one.

Related

Which caching mechanism to use in my Spring application in the scenarios below?

We are using a Spring Boot application with a MariaDB database. We get data from different services and store it in our database. When calling another service, we need to fetch data from the DB (based on a mapping) and then call that service.
To avoid the database hit, we want to cache all the mapping data and use the cache when retrieving data and calling the service API.
So our ask is: add data to the cache when it gets created in the database (which could add up to millions of records), and remove it from the cache when the status in one of the columns is "xyz" (for example), or based on an eviction policy.
Should we use an in-memory cache such as Hazelcast/Ehcache, or Redis/Couchbase?
Please suggest.
Thanks
I mostly agree with Rick in terms of "don't build it until you need it"; however, it is important these days to think early about where this caching layer would fit later and how to integrate it (for example, behind interfaces). Adding it to an unprepared system is always possible, but much more expensive (in terms of hours) and more complicated.
OK, on to the actual question. Disclaimer: I'm a Hazelcast employee.
In general, for caching, Hazelcast, Ehcache, Redis and others are all good candidates. The first question you want to ask yourself, though, is: "Can I hold all necessary records in the memory of a single machine?" With Ehcache in particular you get replication (all machines hold all information), which means every single node needs to keep the full data set in memory. Depending on the size you want to cache, that may not be optimal. In this case Hazelcast might be the better option, as we partition data across a cluster and optimize access down to a single network hop, with minimal overhead beyond the network latency.
The second question is about serialization. Do you want to store information in a highly optimized serialization format (which needs code to transform it back into something human-readable), or do you want to store it as JSON?
The third question is about the number of clients and threads that will access the data store. Obviously a local cache like Ehcache is always the fastest option, at the cost of lots and lots of memory. Apart from that, the most important factor is the threading model the in-memory store uses: either it is multithreaded and scales nicely, or it is a single-threaded design which becomes a bottleneck once you exhaust that thread. You can work around that with more processes, but it is a workaround rather than a way to utilize today's systems to the fullest.
In more general terms, each of the systems you mention would do the job. The best tool, however, should be selected via a POC/prototype against your real-world use case. The important bit is "real-world": a single thread behaves amazingly under low pressure (obviously faster), but once exhausted it becomes a major bottleneck (again, obviously delaying responses).
I hope this helps a bit, since, at least to me, any answer along the lines of "yes, we are the best option" would be an immediate no-go coming from the person who said it.
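For what it's worth, here is a rough sketch of how the Spring side could look regardless of which provider ends up behind it (the cache name, the Mapping entity, the MappingRepository and the "xyz" handling are placeholders taken from the question or invented for illustration; entity and repository declarations are omitted). With @EnableCaching on a configuration class and the chosen provider on the classpath, the annotations below give you populate-on-write and evict-on-status-change:

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.cache.annotation.CachePut;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    // Sketch only: the provider behind the CacheManager could be Hazelcast,
    // Ehcache, Redis, etc. Mapping and MappingRepository are hypothetical.
    @Service
    public class MappingService {

        private final MappingRepository repository;

        public MappingService(MappingRepository repository) {
            this.repository = repository;
        }

        // Read-through: the first call hits the DB, later calls hit the cache.
        @Cacheable(cacheNames = "mappings", key = "#id")
        public Mapping findMapping(long id) {
            return repository.findById(id);
        }

        // Populate/update the cache whenever a mapping is written to the DB.
        @CachePut(cacheNames = "mappings", key = "#mapping.id")
        public Mapping saveMapping(Mapping mapping) {
            return repository.save(mapping);
        }

        // Remove from the cache when the status flips to the terminal value.
        @CacheEvict(cacheNames = "mappings", key = "#id")
        public void markInactive(long id) {
            repository.updateStatus(id, "xyz");
        }
    }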
Build InnoDB with the memcached Plugin
https://dev.mysql.com/doc/refman/5.7/en/innodb-memcached.html

Will using an IMDG on top of NoSQL really boost the performance of the application?

Does using an IMDG layer (GemFire, Hazelcast) on top of a NoSQL DB (MongoDB) improve the performance and scalability of the application?
Maybe. As always, "It Depends".
On what? Primarily the characteristics of your data and your data access patterns. A few considerations (of many!) might be:
How much data do you have, and is it feasible and/or cost-effective to even attempt to cache a portion of it in memory?
Are there "hot" subsets of the data which are candidates for caching? For example, in social apps the last x hours of data drive virtually all the load.
What cache hit % do you need to achieve in order to benefit from the cache? (This is driven by your app's sensitivity to latency; a quick back-of-the-envelope example follows this answer.)
What is the ratio between reads/writes?
What are the consistency and reliability requirements?
What types of reads does your app do? Complex queries? Simple GETs? In which mix?
Caching layers can be incredibly effective when used correctly. But they aren't a silver bullet. Don't expect to just slap one over your existing store and magically get "faster" or "more scalable"...
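To make the hit-rate point above concrete, a back-of-the-envelope calculation (the latency figures are assumptions, not measurements):

    // Effective read latency = hitRate * cacheLatency + (1 - hitRate) * dbLatency.
    public class CacheMath {
        public static void main(String[] args) {
            double cacheLatencyMs = 1.0;   // assumed in-memory read
            double dbLatencyMs = 20.0;     // assumed MongoDB read

            for (double hitRate : new double[] {0.50, 0.90, 0.99}) {
                double effective = hitRate * cacheLatencyMs + (1 - hitRate) * dbLatencyMs;
                System.out.printf("hit rate %.0f%% -> effective read latency %.2f ms%n",
                        hitRate * 100, effective);
            }
            // Output: 50% -> 10.50 ms, 90% -> 2.90 ms, 99% -> 1.19 ms.
            // Until the hit rate is high, the cache barely moves the needle.
        }
    }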

Is performance worse when putting the database on a dedicated server?

I heard that one way to scale your system is to use different machines for the web server and the database server, and even to use multiple instances of each type of server.
I wonder how this could improve performance over the one-server-for-everything model. Aren't there bottlenecks in the connections between those servers? Moreover, you have to take care of synchronization when accessing the database server from different web servers.
If your infrastructure is small enough then yes, one server for everything is (probably) the best way to do things. However, when your size starts to require more than one server, scaling up a single box can become much more expensive than having multiple cheaper servers. Multiple servers also give you more failure tolerance (if one server goes down, the others can take over). As for synchronizing data: on the database side that is usually achieved with clustering or replication; on the application side it can be achieved with the likes of memcached or by saving to disk; and the web servers themselves don't really need to be synchronized. Network bottlenecks on a local network (like the one your servers would share) are negligible.
Having numerous servers may appear to be an attractive solution. One problem which often occurs is the latency that arises from communication between the servers. Even with fiber interconnects it will be slower than if the applications reside on the same machine. Of course, in a single-server solution, if one server application does a lot of work it may starve the DB application of needed CPU resources.
Another issue which may turn up is that of SANs. Proponents of SANs will say that they are just as fast as locally attached storage. The purpose of SANs is to cut costs on storage. Even if the SAN were to use the same high-performance disks as the local solution (wiping out the cost savings) you still have a slower connection and more simultaneous users to contend with on the SAN.
Conventional wisdom has it that a DB should be SQL-based with normalized data. It is worthwhile to spend some time weighing the pros and cons (yes, SQL has cons) against each other.
Since "time-immemorial" (at least the last twenty years) indifferent programmers have overloaded servers with stuff they are too lazy to implement in the client. Indifferent (or ignorant) architects allow this practice to continue. End result: sluggish c/s implementations which are close to useless. Tripling the server park is a desperate "week-before-delivery" measure which - at best - results in a marginal performance increase. Often you lose performance instead.
DBs should not be bothered with complex requests involving multiple tables. Simple requests filtered by the client is the way to go.
One thing to try might be to put framework/SOAP-handling on one server and let it send binary requests to the DB server which answers with binary responses (trying to make sense of a SOAP request is very CPU-intensive and something which you don't want to leave to the DB application which will be more or less choked anyway). This way you'll have SOAP throttling only one part of the environment (the interface to users/other framework users) and the rest of the interfaces will be as efficient as they can be (binary).
Another thing - if the application allows it - is to put a cache front-end on the DB-application. The purpose of this cache is to do as much repetitive stuff as possible without involving the DB itself. This way the DB is left with handling fewer but (perhaps) more complicated requests instead of doing everything.
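A minimal sketch of such a cache front-end (hand-rolled here for illustration; in practice you would more likely reach for memcached, Redis or similar), where only cache misses ever reach the DB:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Function;

    // Read-through cache in front of the database: repeated lookups for the
    // same key never reach the DB. (No eviction/TTL shown; a production cache
    // needs both.)
    class ReadThroughCache<K, V> {
        private final ConcurrentMap<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> dbLookup;   // the expensive call to the DB

        ReadThroughCache(Function<K, V> dbLookup) {
            this.dbLookup = dbLookup;
        }

        V get(K key) {
            // computeIfAbsent only invokes the DB lookup on a cache miss.
            return cache.computeIfAbsent(key, dbLookup);
        }

        void invalidate(K key) {
            cache.remove(key);   // call this whenever the underlying row changes
        }
    }

The dbLookup function stands in for whatever DAO call actually hits the database; keeping invalidate() correct when the data changes is where most of the real-world complexity lives.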
Oh, and don't let clients send SQL statements directly to the DB. You'd be surprised at the junk a DB has to contend with.

Has Object Prevalence (Prevayler, Madeleine) been used in a Production System?

Have object prevalence mechanisms been used in an actual production system? I'm referring to something like Prevayler or Madeleine.
The only thing I've found is Instiki, a wiki engine, but since it started it has switched to SQLite. (The actual Instiki page is down.)
A company I used to work for used Prevayler as part of a computer-based student examination/assessment system for about five or six years.
Prevayler was used to store the state of candidates’ tests on a server physically located within a single testing centre. The volume of data stored was fairly low, since at most there would only be a few hundred candidates taking a test at a single testing centre. Therefore it was practical to run Prevayler on commodity hardware in 2004 – the ‘server’ was in most cases just a typical low-end desktop machine temporarily borrowed for the purpose of running an exam.
The idea was that if a candidate’s computer crashed while they were taking the test, then they could quickly resume the test on the same or different computer. It worked pretty well.
There were occasional difficulties when some new requirement led to a change to the object model, since by default Prevayler close-couples the object model to the representation of data on disk. This wasn’t actually a major problem for us, since changes to the object model occurred between exams at which point we could usually afford to throw old data away (with some exceptions due to bad design on our part).
There are lots of things you can do to make it feasible to change the object model, it’s a matter of what’s best for your application. Throwing old data away was generally the best solution for us.
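For readers unfamiliar with the pattern, here is a stripped-down sketch of the prevalence idea (the general shape only, not Prevayler's actual API): every change is a serializable command, the command is logged before it is applied to the in-memory model, and on startup the log is replayed to rebuild state. The close coupling mentioned above comes from the fact that the logged commands and snapshots embed the object model itself, so changing the model can invalidate old data.

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // Prevalence in miniature (illustration only).
    class ExamState implements Serializable {
        final List<String> answers = new ArrayList<>();
    }

    interface Command extends Serializable {
        void executeOn(ExamState state);
    }

    class RecordAnswer implements Command {
        private final String answer;
        RecordAnswer(String answer) { this.answer = answer; }
        public void executeOn(ExamState state) { state.answers.add(answer); }
    }

    class PrevalentSystem {
        private final ExamState state = new ExamState();
        private final List<Command> journal = new ArrayList<>(); // stands in for the on-disk log

        // Real engines write the command to disk (and sync) before applying it,
        // so a crash can be recovered by replaying the journal.
        synchronized void execute(Command command) {
            journal.add(command);       // 1. persist the command
            command.executeOn(state);   // 2. apply it to the in-memory model
        }

        synchronized ExamState recover() {
            ExamState rebuilt = new ExamState();
            for (Command c : journal) c.executeOn(rebuilt);  // replay on startup
            return rebuilt;
        }
    }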
There was also a back-end system that aggregated candidates’ tests from all testing centres into an SQL database. That stored a higher volume of data than Prevayler could have reasonably coped with at the time. It would probably be feasible to use Prevayler there today, but I don’t think the usage patterns would have suited Prevayler particularly well, since most of the data tended to be written, read once for marking, then forgotten about and treated as archive data unless the result of a test got queried.
That company has since moved away from Prevayler, but the reason for that was more political than technical.
Well, we're using Prevayler in a project that's aiming towards production by sometime next year, but we're not close enough to give any real scouting report. We think it's going to work...
"LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing."
http://martinfowler.com/articles/lmax.html

Recommendation for a large-scale data warehousing system

I have a large amount of data I need to store, and be able to generate reports on - each one representing an event on a website (we're talking over 50 per second, so clearly older data will need to be aggregated).
I'm evaluating approaches to implementing this. Obviously it needs to be reliable, and it should be as easy to scale as possible. It should also be possible to generate reports from the data in a flexible and efficient way.
I'm hoping that some SOers have experience of such software and can make a recommendation, and/or point out the pitfalls.
Ideally I'd like to deploy this on EC2.
Wow. You are opening up a huge topic.
A few things right off the top of my head...
think carefully about your schema for inserts in the transactional part and reads in the reporting part; you may be best off keeping them separate if you have really large data volumes
look carefully at the latency that you can tolerate between real-time reporting on your transactions and aggregated reporting on your historical data. Maybe you should have a process which runs periodically and aggregates your transactions (a minimal scheduling sketch follows this answer).
look carefully at any requirement which sees you reporting across your transactional and aggregated data, either in the same report or as a drill-down from one to the other
prototype with some meaningful queries and some realistic data volumes
get yourself a real production-quality, enterprise-ready database, e.g. Oracle / MSSQL
think about using someone else's code/product for the reporting e.g. Crystal/BO / Cognos
as I say, huge topic. As I think of more I'll continue adding to my list.
HTH and good luck
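One minimal way to realize the periodic-aggregation point above (the DAO interfaces are hypothetical placeholders): a scheduled job that rolls new transactional rows up into the reporting schema at whatever latency you can tolerate.

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical data-access interfaces, just enough to show the shape.
    interface EventDao {
        List<String> eventsAfter(long lastProcessedId);   // new raw events
    }

    interface ReportDao {
        long lastProcessedEventId();                       // high-water mark
        void writeSummary(long newHighWaterMark, int eventCount);
    }

    class AggregationJob {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final EventDao eventDao;
        private final ReportDao reportDao;

        AggregationJob(EventDao eventDao, ReportDao reportDao) {
            this.eventDao = eventDao;
            this.reportDao = reportDao;
        }

        // Every 5 minutes, roll up raw events recorded since the last run.
        void start() {
            scheduler.scheduleAtFixedRate(this::rollUp, 0, 5, TimeUnit.MINUTES);
        }

        private void rollUp() {
            long lastId = reportDao.lastProcessedEventId();
            List<String> batch = eventDao.eventsAfter(lastId);
            // Real aggregation (counts per page, per hour, ...) would replace this.
            reportDao.writeSummary(lastId + batch.size(), batch.size());
        }
    }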
@Simon made a lot of excellent points; I'll just add a few and reiterate/emphasize some others:
Use the right datatype for the Timestamps - make sure the DBMS has the appropriate precision.
Consider queueing for the capture of events, allowing for multiple threads/processes to handle the actual storage of the events.
Separate the schemas for your transactional and data warehouse
Seriously consider a periodic ETL from transactional db to the data warehouse.
Remember that you probably won't have 50 transactions/second 24x7x365 - think about peak transactions vs. average transactions.
Investigate partitioning tables in the DBMS. Oracle and MSSQL will both partition on a value (like date/time).
Have an archiving/data retention policy from the outset. Too many projects just start recording data with no plans in place to remove/archive it.
I'm surprised none of the answers here cover Hadoop and HDFS - I would suggest that is because SO is a programmers' Q&A site and your question is in fact a data science question.
If you're dealing with a large number of queries and long processing times, you would use HDFS (a distributed storage layer you can run on EC2) to store your data and run batch queries (i.e. analytics) against it on commodity hardware.
You would then provision as many EC2 instances as needed (hundreds or thousands depending on how big your data-crunching requirements are) and run MapReduce queries against your data to produce reports.
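As a skeletal illustration (assuming events land in HDFS one per line as "hour<TAB>pageUrl"; the class name and paths are invented), such a report job is essentially the stock word-count shape applied to event counting:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class EventCount {
        // Emit (hour + page) -> 1 for each raw event line; identical lines are
        // identical keys, so the reducer ends up counting events per page per hour.
        public static class EventMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            @Override
            protected void map(LongWritable key, Text line, Context ctx)
                    throws IOException, InterruptedException {
                ctx.write(new Text(line.toString()), ONE);
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> counts, Context ctx)
                    throws IOException, InterruptedException {
                int total = 0;
                for (IntWritable c : counts) total += c.get();
                ctx.write(key, new IntWritable(total));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "event-count");
            job.setJarByClass(EventCount.class);
            job.setMapperClass(EventMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));    // raw events in HDFS
            FileOutputFormat.setOutputPath(job, new Path(args[1]));  // aggregated report
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }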
Wow.. This is a huge topic.
Let me begin with databases. First, get something good if you are going to have crazy amounts of data. I like Oracle and Teradata.
Second, there is a definitive difference between recording transactional data and reporting/analytics. Put your transactional data in one area and then roll it up on a regular schedule into a reporting area (schema).
I believe you can approach this in two ways:
Throw money at the problem: Buy best in class software (databases, reporting software) and hire a few slick tech people to help
Take the homegrown approach: Build only what you need right now and grow the whole thing organically. Start with a simple database and build a web reporting framework. There are a lot of decent open-source tools and inexpensive agencies that do this work.
As far as the EC2 approach goes, I'm not sure how it would fit into a data storage strategy. The processing requirements here are limited, and processing is where EC2 is strong; your primary goal is efficient storage and retrieval.
