When would I want to create more than one verticle (assuming I am using non-blocking DB clients in a stateless microservice)? - microservices

Assuming
I am building a stateless microservice which exposes a few simple API endpoints (e.g., a read-through cache, saving an entry to the database),
I am using non-blocking database clients (e.g., for MySQL or Redis), and
I always want my microservices to speak to each other via HTTP (by placing EC2 instances behind a load balancer).
Questions
When would I want to use more than one standard verticle (rather than writing the whole microservice as a single verticle and deploying n instances of it, where n = the number of event-loop threads)? Won't adding more verticles only add serialization and context-switching costs?
Let's say I split the microservice into multiple standard verticles (for whatever reason). Wouldn't deploying n instances of each (n = number of event-loop threads) always give better performance than deploying a different ratio of instances? Each verticle is just a listener on an address, so every event-loop thread could handle all kinds of messages, and they are load balanced already.
When would I want to run my application in cluster mode? Based on the docs, I get the feeling that cluster mode only makes sense when you have multiple verticles, and even then only when you have an actual use case for clustering, e.g., different EC2 instances handling requests for different users to help with data locality (say, using Ignite).
P.S. Please help even if you can only answer one of the above questions.

I always want my microservices to speak to each other via HTTP (by placing EC2 instances behind a load balancer)
It doesn't make much sense to use Vert.x if you have already gone for this overcomplicated approach.
Vert.x uses the Event Bus for in-cluster communication, eliminating the need for HTTP as well as for a load balancer in front.
Answers:
Why should it? If verticles are not talking to each other, where would the serialization overhead occur?
If your verticles are using non-blocking calls (and thus are multithreaded), you won't see any difference between 1 and N instances on the same machine. Also, if your verticle starts an (HTTP) server on a certain port, then all instances will share that single server across all threads (Vert.x does some magic rerouting here); a short sketch of this follows these answers.
Cluster mode is what I mentioned at the beginning. It is the proper way to distribute and scale your microservices.
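To make the "N instances sharing one HTTP server" point concrete, here is a minimal sketch assuming Vert.x 4.x and the plain io.vertx.core API; the class name and port are made up.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

// One verticle class, deployed N times in the same JVM. All instances listen
// on port 8080 and share the server socket; Vert.x distributes incoming
// connections across the event-loop threads.
public class ApiVerticle extends AbstractVerticle {

  @Override
  public void start() {
    vertx.createHttpServer()
        .requestHandler(req -> req.response().end("handled by " + Thread.currentThread().getName()))
        .listen(8080);
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    int n = Runtime.getRuntime().availableProcessors(); // roughly the number of event-loop threads
    vertx.deployVerticle(ApiVerticle.class.getName(), new DeploymentOptions().setInstances(n));
  }
}
```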

A verticle is a way to structure your code. So you'd probably want a verticle of another type when your main verticle grows too big. How big? That depends on your preferences. I try to keep them rather small, about 200 LOC at most, each doing one thing. (A minimal sketch of such a split, with verticles talking over the event bus, follows these answers.)
Not necessarily. Different verticles can perform very different tasks, at different paces. Having N instances of all of them is not necessarily bad, but rather redundant.
Probably never. Clustered mode was a thing before microservices. Using it adds another level of complexity (cluster manager, for example Hazelcast), and it also means you cannot be as polyglot.
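If you do split the service, verticles on the same JVM talk over the event bus without any network hop. Here is a minimal sketch, assuming Vert.x 4.x; the verticle names and the "cache.get" address are hypothetical.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// Hypothetical split: an HTTP-facing verticle delegates cache lookups to a
// separate verticle over the event bus.
public class CacheVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().consumer("cache.get", msg ->
        // a real implementation would consult a local map or a Redis client here
        msg.reply("value-for-" + msg.body()));
  }

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new CacheVerticle());
    vertx.deployVerticle(new HttpVerticle());
  }
}

class HttpVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.createHttpServer()
        .requestHandler(req -> vertx.eventBus()
            .request("cache.get", req.getParam("key"))
            .onSuccess(reply -> req.response().end(String.valueOf(reply.body())))
            .onFailure(err -> req.response().setStatusCode(500).end()))
        .listen(8080);
  }
}
```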

Related

Microservices interdependency

One of the benefits of a microservice architecture is that one can scale heavily used parts of the application without scaling the other parts. This supposedly provides cost benefits.
However, my question is: if a heavily used microservice depends on another microservice to do its work, wouldn't you have to scale the other service as well, seemingly defeating the purpose? If a microservice calls another microservice in real time to do its job, does that mean the microservice boundaries are not established correctly?
There's no rule of thumb for that.
Scaling usually depends on metrics: when certain thresholds are reached, new instances are created, and the same goes for removing them when they are no longer needed.
Some services do simple, fast tasks, like taking an input and writing it to the database, while others run longer tasks that can take any amount of time.
If a service that needs to scale is calling a service that can easily and reliably handle heavy loads, then there is no need to scale the latter.
The idea behind scaling is to scale up when needed in order to support the load, and then scale down whenever the load returns to its regular range in order to reduce costs.
There are two topics to discuss here.
First, it is usually not good practice for two microservices to communicate synchronously, because you are coupling them in time: one service has to wait for the other to finish its task. It is normally a better approach to use a message queue to decouple the producer and the consumer; that way the load on one service doesn't affect the other (see the sketch after this answer).
However, there are situations in which synchronous communication between two services is necessary, but that does not necessarily mean both have to scale the same way. For example, if one service has to make several calls to other services, run database queries, or perform other heavy computational tasks, while the service it calls only sorts an array, the first service will probably have to scale much more than the second in order to process the same number of requests, because its threads will be occupied for longer.
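To make the decoupling idea concrete, here is a minimal in-process sketch that uses a java.util.concurrent.BlockingQueue as a stand-in for a real broker (RabbitMQ, Kafka, SQS, ...): the producer hands work off and returns immediately, while the consumer drains the queue at its own pace, so the two sides can be scaled or slowed down independently.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// BlockingQueue as a stand-in for a message broker: the producer is never
// blocked by slow processing, so producer and consumer are decoupled in time.
public class QueueDecouplingSketch {

  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Producer: enqueues work and moves on immediately.
    Thread producer = new Thread(() -> {
      for (int i = 0; i < 5; i++) {
        queue.offer("order-" + i);
        System.out.println("enqueued order-" + i);
      }
    });

    // Consumer: processes at its own (slower) pace.
    Thread consumer = new Thread(() -> {
      try {
        while (true) {
          String msg = queue.take();
          Thread.sleep(200); // simulate a heavier task
          System.out.println("processed " + msg);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    consumer.setDaemon(true);

    producer.start();
    consumer.start();
    producer.join();
    Thread.sleep(1500); // give the consumer time to drain before the JVM exits
  }
}
```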

Microservices - Connection Pooling when connecting to a single legacy database

I am working on developing microservices for a monolithic application using Spring Boot + Spring Cloud + Spring JDBC.
Currently, the application connects to a single database through a Tomcat JNDI connection pool.
We have a constraint here: we cannot change the database architecture at this point in time, for various reasons such as the large number of DB objects, tight dependencies with other systems, etc.
So we have isolated the microservices based on application features. My concern is that if we develop microservices with each having its own connection pool, the number of connections to the database can grow very quickly.
Currently, I am considering two solutions:
Calculate the number of connections currently used by each application feature and arrive at max/min connection parameters per service, which is a very tedious process, and we don't have any mechanism to get the connection count per app feature.
Develop a data microservice with a single connection pool, which receives the query object from the other microservices, runs the query against the database, and returns the result set to the caller.
I am not sure whether the second approach is a best practice in a microservices architecture.
Can you please suggest any other standard approaches that could help in the current situation?
It's all about the tradeoffs.
Calculate the number of connections currently used by each application feature and arrive at max/min connection parameters per service.
Cons: As you said, some profiling and guesswork is needed to reach the right number of connections per app feature.
Pros: Unlike the second approach, you avoid the performance overhead.
Develop a data microservice with a single connection pool, which receives the query object from the other microservices, runs the query against the database, and returns the result set to the caller.
Pros: Minimal work upfront.
Cons: One more layer, and in turn one more failure point. Performance will degrade, as you have to deal with serialization -> HTTP(S) network latency -> deserialization -> (the JDBC work, which is part of either approach) -> serialization -> HTTP(S) network latency -> deserialization. (In your case this performance cost may be negligible, but if every millisecond counts in your service, then this is a huge deciding factor.)
In my opinion, I wouldn't split the application layer alone until I had analyzed my domains and my data stores.
This is a good read: http://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/
I am facing a similar dilemma at my work and I can share the conclusions we have reached so far.
There is no silver bullet at the moment, so:
1 - Calculating the number of connections by dividing the total desired number of connections among the microservice instances will work well if your microservices don't need to scale elastically and drastically.
2 - Not having a pool at all and letting connections be opened on demand. This is what is typically done in serverless functions (like AWS Lambda). It reduces the total number of open connections, but the downside is that you lose performance, since opening connections on the fly is expensive.
You could implement some sort of topic that lets each service know, via a listener, that the number of instances has changed, and update the total connection count accordingly, but it is a complex solution and goes against the microservice principle that you should not change a service's configuration after it has started running.
Conclusion: I would calculate the number if the microservice tends not to grow in scale, and go without a pool if it does need to grow elastically and exponentially; in the latter case, make sure a retry is in place in case a connection is not obtained on the first attempt (a small sketch follows).
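A rough sketch of that retry, assuming plain JDBC with connections opened on demand; the attempt count, backoff, and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Open a connection on demand, retrying a few times with a small backoff
// instead of failing on the first refused attempt.
public final class OnDemandConnection {

  public static Connection openWithRetry(String url, String user, String password)
      throws SQLException, InterruptedException {
    SQLException last = null;
    for (int attempt = 1; attempt <= 3; attempt++) {
      try {
        return DriverManager.getConnection(url, user, password);
      } catch (SQLException e) {
        last = e;
        Thread.sleep(200L * attempt); // simple linear backoff between attempts
      }
    }
    throw last;
  }
}
```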
There is an interesting grey area here, awaiting a better way of controlling connection pools in microservices.
Meanwhile, and to make the problem even more interesting, I recommend reading the About Pool Sizing article from HikariCP: https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
The ideal number of concurrent connections to a database is actually smaller than most people think.
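For reference, here is a minimal HikariCP setup along those lines, with a deliberately small, fixed-size pool; the JDBC URL, credentials, and the pool size of 10 are purely illustrative.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Deliberately small, fixed-size pool in the spirit of the pool-sizing article.
public final class SmallPool {

  public static HikariDataSource create() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:mysql://db-host:3306/app"); // placeholder URL
    config.setUsername("app");                          // placeholder
    config.setPassword("secret");                       // placeholder
    config.setMaximumPoolSize(10); // small pools usually outperform large ones
    config.setMinimumIdle(10);     // fixed-size pool: min == max
    return new HikariDataSource(config);
  }
}
```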

Akka actor model vs Java usage in the following scenario

I want to know the applicability of the Akka Actor model.
I know it is useful when a huge number of actor instances are created and destroyed, e.g., a call server, where every incoming call creates an actor instance that communicates with a few other actors and is killed after the call is over.
Is it also useful in the following scenario:
A server has a few processing elements (10-50) implemented as actors. The lifetime of these processing elements is infinite. Some of them do not maintain state and a few do. The processing elements process each message and pass it on to other actors in a fixed manner. The system receives a huge number of messages from outside; each message is passed through the processing elements and leaves the system.
My gut feeling is that we cannot gain any advantage by using the Akka actor model, or even by implementing this server in Scala, because the use case for which Akka is designed does not apply here. If scaling up meant that the processing elements were increased dynamically, then it would apply.
For fixed topologies, I think that if I implement it in Java it is going to be better in terms of raw performance. The immutability favored by Scala leads to more copies and so reduces performance. So I believe I had better stick to Java.
Is my understanding correct? In a nutshell, I want to know why I should leave Java and use Scala/Akka for the application scenario above. My target is to process 1 million messages per second.
If this question is still relevant...
Scala vs. Java
Scala gives developers productivity.
Immutability reduces debugging effort to almost zero.
The GC copes well with discarded immutable objects.
Akka Actors vs. other means
Akka has a dispatcher that distributes all tasks across a fixed thread pool. This allows available resources to be consumed evenly. This approach is much better than fixed worker threads: the processing resources are given to the tasks, not to the DataFlow nodes.
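A minimal sketch of that model using the classic Akka Java API: two long-lived "processing elements" wired into a fixed pipeline, both scheduled by the default dispatcher on a shared thread pool. The actor names and the string messages are made up for illustration.

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

// Two long-lived "processing elements" in a fixed pipeline. Threads from the
// dispatcher's pool are given to whichever actor currently has messages.
public class PipelineSketch {

  public static class Enricher extends AbstractActor {
    private final ActorRef next;

    public Enricher(ActorRef next) { this.next = next; }

    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class, msg -> next.tell(msg + ":enriched", getSelf()))
          .build();
    }
  }

  public static class Sink extends AbstractActor {
    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class, msg -> System.out.println("out: " + msg))
          .build();
    }
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("pipeline");
    ActorRef sink = system.actorOf(Props.create(Sink.class), "sink");
    ActorRef enricher = system.actorOf(Props.create(Enricher.class, sink), "enricher");
    enricher.tell("event-1", ActorRef.noSender());
  }
}
```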
DataFlow implementation
There is a SynapseGrid library that is built on top of Akka Actors and allows easy construction of DataFlow systems distributed over fixed immortal Actors. It can even draw the DataFlow diagram (in .dot format) of the whole system.
(The library is more convenient to use from Scala.)

Is performance worse when putting the database on a dedicated server?

I heard that one way to scale your system is to use different machines for the web server and the database server, and even to use multiple instances of each type of server.
I wonder how this could improve performance over the one-server-for-everything model. Aren't there bottlenecks in the connections between those servers? Moreover, you have to take care of synchronization when accessing the database server from different web servers.
If your infrastructure is small enough then yes, one server for everything is (probably) the best way to do things. However, when your size starts to require more than one server, scaling up a single box can become much more expensive than having multiple cheaper servers. It also gives you more failure tolerance (if one server goes down, the other(s) can take over). As for synchronizing data: on the database side that is usually achieved with clustering or replication; on the application side it can be achieved with the likes of memcached or by saving to disk; and web servers themselves don't really need to be synchronized. Network bottlenecks on a local network (as your servers would be to one another) are negligible.
Having numerous servers may appear to be an attractive solution. One problem which often occurs is the latency that arises from communication between the servers. Even with fiber interconnects it will be slower than if they reside on the same server. Of course, in a single-server solution, if one server application does a lot of work it may starve the DB application of needed CPU resources.
Another issue which may turn up is that of SANs. Proponents of SANs will say that they are just as fast as locally attached storage. The purpose of SANs is to cut costs on storage. Even if the SAN were to use the same high-performance disks as the local solution (wiping out the cost savings) you still have a slower connection and more simultaneous users to contend with on the SAN.
Conventional wisdom has it that a DB should be SQL-based with normalized data. It is worthwhile to spend some time weighing the pros and cons (yes, SQL has cons) against each other.
Since "time immemorial" (at least the last twenty years) indifferent programmers have overloaded servers with stuff they are too lazy to implement in the client. Indifferent (or ignorant) architects allow this practice to continue. The end result: sluggish client/server implementations which are close to useless. Tripling the server park is a desperate "week-before-delivery" measure which, at best, results in a marginal performance increase. Often you lose performance instead.
DBs should not be bothered with complex requests involving multiple tables. Simple requests filtered by the client are the way to go.
One thing to try might be to put the framework/SOAP handling on one server and let it send binary requests to the DB server, which answers with binary responses (trying to make sense of a SOAP request is very CPU-intensive and something you don't want to leave to the DB application, which will be more or less choked anyway). This way you'll have SOAP throttling only one part of the environment (the interface to users/other framework users), and the rest of the interfaces will be as efficient as they can be (binary).
Another thing, if the application allows it, is to put a caching front end on the DB application. The purpose of this cache is to handle as much of the repetitive work as possible without involving the DB itself. That way the DB is left handling fewer, but (perhaps) more complicated, requests instead of doing everything.
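As an illustration of that caching idea, here is a tiny in-process read-through cache, with a ConcurrentHashMap standing in for memcached/Redis and the loader function standing in for the actual DB query; both names are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Tiny read-through cache: repeated lookups for the same key never reach the
// database; only cache misses invoke the (placeholder) loader.
public class ReadThroughCache<K, V> {

  private final Map<K, V> cache = new ConcurrentHashMap<>();
  private final Function<K, V> loader; // e.g. a JDBC query, stubbed out here

  public ReadThroughCache(Function<K, V> loader) {
    this.loader = loader;
  }

  public V get(K key) {
    return cache.computeIfAbsent(key, loader);
  }
}
```

In a real deployment the map would be replaced by (or backed with) a shared cache such as memcached or Redis, with an expiry policy, so that all front-end servers see the same cached entries.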
Oh, and don't let clients send SQL statements directly to the DB. You'd be surprised at the junk a DB has to contend with.

Windows Azure: Parallelization of the code

I have some matrix multiplication operations. I want to parallelize the execution of those operations across multiple processors. This can be done on a high-performance computing cluster using MPI (Message Passing Interface).
Likewise, can I do some parallelization in the cloud using multiple worker roles? Is there any means of doing that?
The June release of the Azure SDK and Tools, version 1.2, now supports .NET 4. You can now take advantage of the parallel extensions included with .NET 4. This includes Parallel.ForEach() and Parallel.For(), as examples.
The Task Parallel Library (TPL) will only help you on a single VM - it's not going to help divide your work across multiple VMs. So if you set up, say, a 2-, 4-, or 8-core VM, you should really see significant performance gains with parallel execution.
Now, if you want to divide work up across instances, you'll need a way of assigning work to each instance. For example: set up one worker role as a coordinator VM, and another worker role, with n instances, as the computation VM. Then have the coordinator VM enumerate all instances of the computation VM and divide the work n ways, sending 1/n of the work messages to each instance via WCF calls over an internal endpoint. Each VM instance processes its work messages (potentially with the TPL as well) and stores its result in either blob or table storage, accessible to all instances.
In addition to message passing, Azure Queues are a great fit for this situation, as each worker role can read work items from the queue rather than dealing with instance enumeration. This is a much less brittle approach, as the number of workers may change dynamically as you scale.
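The TPL answer above refers to .NET; purely as an illustration of the same single-VM parallel-loop idea, here is a rough Java analogue using parallel streams (the matrices are assumed to be dense, row-major double[][] arrays).

```java
import java.util.stream.IntStream;

// Rough single-machine analogue of Parallel.For: each row of the result
// matrix is computed by a worker from the common fork/join pool.
public final class ParallelMatrixMultiply {

  public static double[][] multiply(double[][] a, double[][] b) {
    int rows = a.length, cols = b[0].length, inner = b.length;
    double[][] c = new double[rows][cols];
    IntStream.range(0, rows).parallel().forEach(i -> {
      for (int j = 0; j < cols; j++) {
        double sum = 0;
        for (int k = 0; k < inner; k++) {
          sum += a[i][k] * b[k][j];
        }
        c[i][j] = sum;
      }
    });
    return c;
  }
}
```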
