How to do load balancing in distributed OSGi?

We deploy two service instances on different machines using CXF Distributed OSGi, and we want to add load balancing to the system. As far as we know, OSGi does not provide any load balancing feature. Does anyone know how to do it?

Load balancing is a concern that is meant to be implemented by the Topology Manager (TM). It would be useful to read the Remote Services Admin specification, which addresses exactly this kind of question.
As far as I know, the CXF implementation of Remote Services only ships a single TM, which is "promiscuous", i.e. it publishes every available service in every listening framework. It is possible, however, to write your own TM to perform load balancing, failover, and so on.
The Remote Services spec is written in such a way that a TM implementation can be developed completely independently of any specific Remote Services implementation.

You should be able to get the complete list of service instances using a ServiceTracker. A nice way to build a load balancer, then, is to write a proxy service yourself that does the load balancing and register it locally as a service with a special property. Your business application can then use the service without knowing anything about the details of the load balancing.
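A minimal sketch of that idea, assuming a hypothetical GreetingService interface exported by a shared API bundle (the LDAP filter keeps the tracker from picking up the proxy itself):

    import java.util.Hashtable;
    import java.util.concurrent.atomic.AtomicInteger;

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.Filter;
    import org.osgi.util.tracker.ServiceTracker;

    public class LoadBalancerActivator implements BundleActivator {

        private ServiceTracker<GreetingService, GreetingService> tracker;

        @Override
        public void start(BundleContext context) throws Exception {
            // Track every backend instance, but not our own proxy (the proxy
            // is the only registration carrying the "loadbalanced" property).
            Filter filter = context.createFilter("(&(objectClass="
                    + GreetingService.class.getName() + ")(!(loadbalanced=*)))");
            tracker = new ServiceTracker<>(context, filter, null);
            tracker.open();

            AtomicInteger next = new AtomicInteger();
            GreetingService balanced = name -> {
                Object[] backends = tracker.getServices();
                if (backends == null || backends.length == 0) {
                    throw new IllegalStateException("no backend instance available");
                }
                // Plain round-robin over whatever instances are currently up.
                int i = Math.floorMod(next.getAndIncrement(), backends.length);
                return ((GreetingService) backends[i]).greet(name);
            };

            // Publish the proxy with a marker property so business code can
            // select it with a filter such as (loadbalanced=true).
            Hashtable<String, Object> props = new Hashtable<>();
            props.put("loadbalanced", Boolean.TRUE);
            context.registerService(GreetingService.class, balanced, props);
        }

        @Override
        public void stop(BundleContext context) {
            tracker.close();
        }
    }

    // Hypothetical service API, normally exported by a shared API bundle:
    interface GreetingService {
        String greet(String name);
    }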

Related

Running multiple Quarkus instances on one machine

I have an application separated into various OSGi bundles which run on a single Apache Karaf instance. However, I want to migrate to a microservice framework because
Apache Karaf is pretty tough to set up due to its dependency mechanism, and
I want to be able to bring the application later to the cloud (AWS, GCloud, whatever)
I did some research, had a look at various frameworks, and concluded that Quarkus might be the right choice due to its container-based approach, its performance, and its cloud integration opportunities.
Now I am struggling at one point and haven't found a solution so far, but maybe I have a misunderstanding here: my plan is to migrate almost every OSGi bundle of my application into a separate microservice. That way, I would be able to horizontally scale only the services for which this is necessary, and I could also update/deploy them separately without having to restart the whole application. Thus, I assume that every service needs to run in a separate Quarkus instance. However, Quarkus does not seem to support this out of the box; instead, I would need to create a separate configuration for each Quarkus instance.
Is this really the way to go? How can the services discover each other? And is there a way for service A to communicate with service B not only via REST calls, but also by using objects, classes, and methods of service B, with service A taking a dependency on service B?
Thanks a lot for any ideas on this!
I think you are mixing up some points between microservices and OSGi-based applications. With microservices you usually have an independent process running each microservice, which can be deployed on the same or on other machines. Because of that you can scale as you said and gain those benefits. But the communication model is no longer a local method call: it has to use a different approach, and it is highly recommended that you use a standard integration mechanism. You can use REST, JSON-RPC, SOAP, or queues and topics for event-driven communication. Through these mechanisms you invoke the 'other' service's operations just as you do in OSGi, only over a different interface: instead of a local invocation you make a remote one.
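As a minimal sketch of what such a remote invocation looks like in plain Java (the service-b host name and the /orders resource are hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OrderClient {

        private final HttpClient http = HttpClient.newHttpClient();

        // What used to be a local OSGi call such as orderService.getOrder(id)
        // becomes an HTTP request to the other process.
        public String getOrder(String id) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://service-b:8080/orders/" + id))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body(); // e.g. a JSON representation of the order
        }
    }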
Service discovery is something you can do with just virtual IPs, accessing other services through a common DNS name and a load balancer, or using Kubernetes DNS if you go for Kubernetes as your platform. You could also use a central configuration service, or let each service register itself in a central registry. There are already plenty of different solutions to tackle this complexity.
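For illustration, with the DNS-based approach the client never knows the individual instances; it just resolves one stable name (the service name below is hypothetical, in the usual Kubernetes form):

    import java.net.InetAddress;

    public class DiscoveryDemo {
        public static void main(String[] args) throws Exception {
            // A Kubernetes headless service returns one A record per ready pod;
            // with a regular service you get a single virtual IP instead.
            InetAddress[] instances =
                    InetAddress.getAllByName("service-b.default.svc.cluster.local");
            for (InetAddress instance : instances) {
                System.out.println("available instance: " + instance.getHostAddress());
            }
        }
    }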
Also, and more importantly, you will have to be aware of your new complexities (some of which you already have):
Contract versioning and design
Synchronous or asynchronous communication between services.
How to deal with security at the boundary of the services. Do I even need security in most of my services, or do I just need information about the user identity?
Increased maintenance cost and redundant supporting code for common features (here Quarkus helps you a lot with its extensions, and you also have MicroProfile compatibility).
...
Deciding to go with microservices is not an easy decision, and not one that should be taken in a single step. My recommendation is to analyse your application domain, check whether your design is ready for microservices (in terms of separation of concerns and model cohesion), and extract small parts of your OSGi platform into microservices first. Otherwise you will mostly be forced to make changes to your service interfaces, which, because of the service-to-service contract dependency, is much harder than changing a method and a few invocations.

Spring Boot Microservices load balancing vs cloud load balancing

I am new to microservices (learning phase) and I have a question. We deploy microservices in the cloud (e.g. AWS). The cloud already provides load balancing and logs, and we also implement load balancing (Ribbon) and logging (RabbitMQ and Zipkin) in Spring Boot.
What is the difference between these two implementations? Do we need both?
Can someone answer these questions?
Thanks in advance.
Ribbon is a client-side load balancer, which means there is no extra hop between your client and the service: you basically keep and maintain the list of service instances on the client.
In the AWS load balancer case you have an additional hop between the client and the server.
Both have advantages and disadvantages. The former has the advantage of not depending on any specific external solution: with Ribbon and a service discovery like Eureka you can deploy your product to any cloud provider or on-premise setup without additional effort. The latter has the advantage of not needing an extra service discovery component or a cached service list on the client, but it comes with that additional hop, which might be an issue if you are running a very high-load system.
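For reference, the client-side variant typically looks like this in Spring Cloud (a sketch; "inventory-service" is a hypothetical service id resolved via a discovery service such as Eureka, so no extra network hop is involved):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.web.client.RestTemplate;

    @SpringBootApplication
    public class ClientApplication {

        @Bean
        @LoadBalanced // replaces the logical host name with a concrete instance per call
        RestTemplate restTemplate() {
            return new RestTemplate();
        }

        public static void main(String[] args) {
            SpringApplication.run(ClientApplication.class, args);
        }
    }

    // Elsewhere, the logical name is balanced on the client side:
    //   String stock = restTemplate.getForObject(
    //           "http://inventory-service/stock/42", String.class);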
Although I don't have much experience with AWS CloudWatch, what I know is that it helps you collect logs from different AWS components in a central place, and that is what you are trying to do with your solution.

RESTful Microservice failover & load balancing

At the moment we have some monolithic web applications and are trying to move the projects to a microservices infrastructure.
For the monolithic applications we have HAProxy and session replication to provide failover and load balancing.
Now we are building some RESTful microservices with Spring Boot, but it's not clear to me what the best way is to build the production environment.
Of course we can run all applications as Unix services and still use a reverse proxy for load balancing and failover. That solution seems very heavy to me and involves a lot of configuration and maintenance; resource management and scaling servers up or down would always be a manual process.
What are the best options for setting up a production environment with 2-3 servers and easy resource management?
Is there a solution that also supports continuous deployment?
I'd recommend looking into service discovery. Netflix describes this as:
A Service Discovery system provides a mechanism for:
Services to register their availability
Locating a single instance of a particular service
Notifying when the instances of a service change
Packages such as Netflix's Eureka could be of help. (EDIT - actually this looks like it might be AWS specific)
This should work well with continuous delivery as the services can make themselves unavailable, be updated and then register availability again.
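A minimal sketch of such a registration with Spring Cloud and Eureka (the service name and discovery URL are hypothetical): each instance registers on startup and deregisters on shutdown, which is what enables the flow described above.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

    @SpringBootApplication
    @EnableDiscoveryClient // registers this instance with the discovery server
    public class OrderServiceApplication {
        public static void main(String[] args) {
            SpringApplication.run(OrderServiceApplication.class, args);
        }
    }

    // application.yml (hypothetical values):
    //   spring:
    //     application:
    //       name: order-service
    //   eureka:
    //     client:
    //       service-url:
    //         defaultZone: http://discovery:8761/eureka/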

Use EIP and integration solutions to distribute layers on cloud?

I want to adopt an EIP solution for the cloud deployment of a web application:
The application will be developed in such a way that each layer (e.g. data, service, web) comes out as a separate module and artifact.
Each layer can be deployed on a different virtual resource in the cloud. In this regard, web nodes need a way to find the related service nodes, and likewise service nodes are connected to data nodes.
Objects in the service layer provide REST access to the services in the application. The web layer is supposed to use the REST services from the service layer to complete requests for users of the application.
For the above requirement of delivering a "highly scalable" application on the cloud, solutions such as Apache Camel, Spring Integration, and Mule ESB seem to be significant options.
There seem to be other discussions, such as a question or a blog post on this topic, but I was wondering whether anybody has specific experience with such a deployment scheme on "the cloud". I'd be thankful for any ideas and shared experiences. TIA.
To me this looks a bit like overengineering. Is there a real reason you need to separate all those layers? What you describe looks a lot like the J2EE applications of some years ago.
How about deploying all layers of the application onto each node and just using simple Java calls or OSGi services to communicate?
This approach has several advantages:
Less complexity
No serialization or DTOs
Transactions are easy / no distributed transactions necessary
Load balancing and failover are much easier, as you can do them on the web layer only
Performance is probably a lot higher
You can implement such an application using Spring or Blueprint (on OSGi).
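As a minimal sketch of this in-process style, using OSGi Declarative Services annotations (a third wiring option alongside Spring/Blueprint; ReportService and DataService are hypothetical layer interfaces): the service layer calls the data layer with a plain Java method call, so there are no DTOs and no remote failure modes.

    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component(service = ReportService.class)
    public class ReportServiceImpl implements ReportService {

        @Reference // injected by the SCR runtime, resolved inside the same JVM
        private DataService dataService;

        @Override
        public String buildReport(String customerId) {
            // A local, in-transaction call into the data layer.
            return "report for " + dataService.loadCustomer(customerId);
        }
    }

    // Hypothetical layer interfaces:
    interface ReportService { String buildReport(String customerId); }
    interface DataService { String loadCustomer(String customerId); }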
Another option is to use a modern Java EE server. If this interests you, take a look at some of Adam Bien's courses; he shows how to use Java EE in a really lean way.
For communication between nodes I have had good experiences with Camel and CXF, but you should try to avoid remoting as much as possible.

Spring RMI load balancing / Scalability

I am looking to implement a web application in which the end user is likely to invoke business logic methods that are both CPU-heavy and require a fair amount of memory to run.
My initial thought is to provide these methods as part of a standalone, stateless business service which can run on a separate machine from the web application. This can then be scaled horizontally as much as I need.
As these service methods are synchronous, I am opting to use RMI as opposed to JMS.
My first question is whether the above approach seems viable, or whether my thought process has gone wrong somewhere (this will be the first time I don't work on a standalone application).
Should that be the case, I have been looking at Spring RMI, which seems to do an excellent job of exposing remote services non-intrusively. However, I am unsure how I could use this API to load balance between multiple servers. Are there any ways of doing this using Spring, or do I need a separate API?
JBoss has the ability to provide RMI proxies that are automatically load balanced: http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html
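Spring itself does not ship RMI load balancing, so outside of JBoss one option is to build one proxy per server with Spring's RmiProxyFactoryBean (available in older Spring Framework versions) and rotate over them. A sketch, with a hypothetical CalculationService and hypothetical URLs:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    import org.springframework.remoting.rmi.RmiProxyFactoryBean;

    public class RoundRobinCalculationClient {

        private final List<CalculationService> proxies = new ArrayList<>();
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinCalculationClient(String... serviceUrls) {
            // e.g. "rmi://host1:1099/CalculationService", "rmi://host2:1099/..."
            for (String url : serviceUrls) {
                RmiProxyFactoryBean factory = new RmiProxyFactoryBean();
                factory.setServiceInterface(CalculationService.class);
                factory.setServiceUrl(url);
                factory.afterPropertiesSet(); // creates the stub-backed proxy
                proxies.add((CalculationService) factory.getObject());
            }
        }

        public CalculationService nextInstance() {
            // Plain round-robin; production code would also skip failing
            // instances and retry the call on the next server.
            return proxies.get(Math.floorMod(next.getAndIncrement(), proxies.size()));
        }
    }

    // Hypothetical remote business interface:
    interface CalculationService {
        double computeHeavy(double input);
    }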
