Difference between OSGi services and REST microservices

OSGi talks about microservices and the press talks about microservices; however, they do not seem to be the same thing. What is the difference between these microservices?

OSGi and microservices share the same architectural style but differ in their granularity. We actually used to call OSGi services microservices until the web stole that name. We now sometimes call them nanoservices.
The principle of (micro|nano)services is to tunnel the communication between modules through a gate with a well-defined API. Since an API is, or at least should be, independent of the implementation, you can change one module without affecting the other modules. One of the most important benefits is that the design of even a large system can remain understandable when looking at the service diagram. In a way, a service-based design captures the essence of the system, leaving the details to the modules.
With web/microservices the gate is a communication endpoint (host:port, for example) and a protocol (REST, for example). The API is defined informally or with something like Swagger/OpenAPI or SOAP.
OSGi defines a (nano) service as an object that is made available to other modules (bundles) to use. Java is used to define the API.
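To make this concrete, here is a minimal sketch of a nanoservice using OSGi Declarative Services annotations. The Greeter API and its classes are made up for illustration, and each type would normally live in its own source file and bundle:

```java
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// The API: a plain Java interface, typically exported from an API bundle.
interface Greeter {
    String greet(String name);
}

// The provider bundle registers an implementation in the service registry.
@Component(service = Greeter.class)
class GreeterImpl implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// A consumer bundle gets the service injected; it never sees GreeterImpl.
@Component
class GreeterClient {
    @Reference
    Greeter greeter;

    @Activate
    void activate() {
        System.out.println(greeter.greet("OSGi"));
    }
}
```

The consumer depends only on the Greeter interface, so the implementation bundle can be replaced at runtime without touching the client.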
Since nanoservices are OSGi's most important design primitive, there is a lot of support to make them easy to work with. Interestingly, since the service registry is dynamic and reflective, it is quite straightforward to map a nanoservice to a microservice and vice versa. The OSGi Alliance standardized this in their model for distributed OSGi, the "Remote Service Admin". This specification allows you to take an OSGi nanoservice and map it to REST, SOAP, or other protocols.
Therefore, choosing OSGi not only allows you to defer the decision to support microservices, it also allows you to add microservices to your system afterwards. Having a unified architectural style for the most basic functions as well as the highest-level functions makes systems easier to understand as well as to scale.

I don't think you're comparing apples to apples here. OSGi is an application architecture, while microservices are a distributed-systems concept.
In my experience, microservices offer a number of benefits:
Individual microservices are easy to deploy, test, and maintain.
Microservices are language-agnostic. That means you could write one microservice in Python, another in JavaScript, a third in Go, and yet another in Java.
Microservices are easy to scale individually. That means that if one type of request is made more often than others, you could scale the microservice you need to, without scaling anything else in the system.
Each microservice in your system owns its own data. This ensures clear boundaries and separation of concerns.
However, they also have some drawbacks:
There are more infrastructure concerns when deploying.
It's difficult to keep messaging between microservices clean and efficient.
It's harder to do end-to-end testing on a system with many moving parts.
There's more overhead in messaging. Instead of a call to another service being a direct method call, it needs to use HTTP or some other form of network communication.
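To illustrate that last point, compare an in-process call with the same lookup over HTTP. This is a rough sketch; the pricing-service endpoint is made up:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PriceLookup {
    // In-process: a direct method call, no serialization, no network hop.
    static int localPrice(String sku) {
        return 42;
    }

    // Cross-service: the same lookup over HTTP, paying for connection setup,
    // serialization, and network latency on every call.
    static String remotePrice(HttpClient client, String sku) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://pricing-service/prices/" + sku)) // hypothetical endpoint
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```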
There's a good article describing some of the differences here.

Related

REST API uses which design pattern

I was asked the questions below in an interview but could not find any straight answers to them.
What design pattern do we follow while creating a REST API in a Spring Boot application?
Which design pattern do microservices use?
For your first question: we use the Model-View-Controller (MVC) pattern to create a REST API, except that in a REST API we don't use a View; instead we return JSON responses. Other patterns may also be used while building a REST API, including the Command Query Responsibility Segregation (CQRS) pattern. It is also worth mentioning that microservices use this same pattern: since microservices are an architectural style that builds a large, complex application as a collection of small, independent services, CQRS is commonly used when building them because it separates the responsibilities of reading and writing data into separate components, giving each component a single responsibility.
The questions the interviewer asked are fairly broad. Anyway, I believe it is important to show your basic knowledge of the Spring MVC pattern and of how the embedded Tomcat servlet container in Spring Boot operates (this is basically the main role of a Spring Boot controller).
In Spring MVC, you use controllers to handle HTTP requests and create REST APIs by adding the spring-boot-starter-web dependency. This dependency includes some key libraries, including the Spring MVC framework and the Tomcat servlet container. Then you have two options, @Controller or @RestController, to handle requests in your web application.
Since the interviewer is asking about REST API design, I would prefer @RestController, because this annotation is capable of producing RESTful response entities.
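For instance, a minimal sketch (the Order type and route are made up for illustration):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    // @RestController implies @ResponseBody, so the return value is
    // serialized straight to JSON instead of being resolved as a view.
    @GetMapping("/orders/{id}")
    public Order getOrder(@PathVariable long id) {
        return new Order(id, "PENDING");
    }

    record Order(long id, String status) {}
}
```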
As for the second question, I am cautious in answering it, because Microservice Architecture is a type of architecture pattern, a more complicated and sophisticated backend structure for a business service. Overall, I believe the event-driven design pattern is a good fundamental option for implementing a successful MSA.
In an MSA, the event-driven design pattern is useful for enabling communication between different microservices. Because microservices are designed to be small, independent, and loosely coupled, they often communicate with each other using asynchronous messages (a.k.a. events). By using an event-driven design pattern, you can create a publish-subscribe model of communication between microservices, which can make it easier to scale the system and add new microservices over time.
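As a hedged sketch of that publish-subscribe style using Apache Kafka (the topic name, broker address, and payload below are made up):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // The publishing service emits an event and moves on; any number of
        // subscribers can react without the publisher knowing about them.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(
                    "order-events", "order-1042", "{\"status\":\"CREATED\"}"));
        }
    }
}
```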
BUT don't forget that MSA contains various design patterns that are useful as well!

Circular dependency of client libraries

I am currently working on two microservices. Microservice A includes microservice B's SDK to call B's API and access some of B's entity classes.
Similarly, microservice B includes microservice A's SDK and accesses microservice A's entity classes.
Now I am facing an issue whenever I need to bump the version of microservice B in microservice A, and vice versa.
How should I solve this problem?
You've already broken the cardinal rule of microservices by tightly coupling the two services.
The right answer here is going to be to refactor these services such that they properly and completely encapsulate functionality. That could involve combining them (if B is always and completely dependent on A, they may not really be separate services), splitting them out into more services, or just shifting responsibility.
But the path you're on, with tightly coupled microservices, leads to a distributed monolith, which is unlikely to provide the benefits you're after (specifically including the co-dependent revision concern you mention here).
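As one sketch of the "shifting responsibility" option, assuming the real coupling is the shared entity classes: pull the shared contract into a neutral module that both services depend on, so neither imports the other's SDK. The OrderPlaced type below is made up:

```java
// shared-contracts module: owns only the API/event types, no business logic.
// Service A and service B both depend on this module and not on each other,
// so bumping A's or B's version no longer creates a dependency cycle.
public interface OrderPlaced {
    String orderId();
    long amountCents();
}
```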
Here's a good answer to a related question that might offer some more insight.

Separate microservice just for microservices orchestration?

I have a few microservices where each microservice has REST endpoints for CRUD operations.
I have to create a workflow that will start from one microservice with some initial input, but later outputs from a microservice can be used as input to other microservices. There can be some synchronous and asynchronous calls to these REST APIs.
I have looked at some of the workflow engines, but I do not think that I can create my workflow without writing any Java code.
Should I write a separate microservice just for microservices orchestration? This orchestration microservice would know the exact workflow and could be configurable for the inputs required to start the workflow, and it could also use a third-party workflow engine like Camunda to store the workflow definition.
Is it correct thinking to have a separate microservice just for microservices orchestration? Till now the existing microservices have no idea about the other microservices. There is also a chance that output from one microservice needs to be massaged before being used as input for another microservice.
I have looked at some of the workflow engines, but I do not think that I can create my workflow without writing any Java code.
This depends on your business processes and the complexity of your workflow. Usually, yes, you will need to write some code to achieve it.
Should I write a separate microservice just for microservices orchestration? This orchestration microservice would know the exact workflow and could be configurable for the inputs required to start the workflow, and it could also use a third-party workflow engine like Camunda to store the workflow definition.
Yes, you can do that. I did something similar on a system using microservices. This is a very good idea in the long run, as you can also configure your workflow per environment. For example, on your development machine you would have a slightly different workflow/configuration, which is practical for developers or QA testing their solutions. On the other hand, on staging/production you can pre-define customer setups/orchestrations which you can reuse any time you get new customers or users.
Is it correct thinking to have a separate microservice just for microservices orchestration? Till now the existing microservices have no idea about the other microservices. There is also a chance that output from one microservice needs to be massaged before being used as input for another microservice.
Yes, you can do that without problems, although I would be careful with the name "orchestration", as this has another meaning in the context of microservice architecture (Docker, Docker Swarm, Kubernetes).
A similar example would be some kind of end-to-end-testing or cross-microservice-testing microservice, which would test cross-microservice business operations and assert the results. Business operations usually involve more than one microservice, so to test them you can use this approach: this microservice would call APIs from multiple microservices and test the results and scenarios against your business rules.
Another example would be something like a seeder microservice (which seems very similar to what you are trying to do here). This seeder microservice would be responsible for seeding (creating) test data in your microservices. The test data is the basic setup/configuration data you need for your microservice business processes to work. This is very handy for development machines or test environments where you need to set up an environment quickly: using the seeder microservice you can easily set up the environment, do your work or tests, and dispose of the environment (data) as needed. This is especially useful for development-machine setups, but it can also be used on shared test environments and the like.
Both of these examples are microservices that serve your needs and make it easier to work with your system.
One final note regarding this:
Till now the existing microservices have no idea about the other microservices.
They should be abstracted from each other in the sense that they are not aware of each other's internal implementation or data (separate databases), but they should communicate with each other in order to perform business operations, which are sometimes cross-microservice, like the typical example of a payment microservice and an order microservice in an online shop. So it is fine that they know about each other and communicate, but this communication has to be designed very carefully in order to avoid some common pitfalls.
They usually communicate with each other via direct calls over HTTP or some other protocol, or through a message queue like Apache Kafka or RabbitMQ. You can read more about it in this answer.
Yes, you should cover the orchestration part in a separate service. And yes, go with a BPMN 2 process engine as the orchestrator, as you already suspected. Yes, this may include writing a little code, mostly for data mapping or connectors.
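For example, with Camunda that "little code" is often a service-task delegate along these lines (a hedged sketch; the delegate, variable names, and stubbed call are made up):

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Attached to a BPMN service task: calls one microservice's REST API and
// maps its output into a process variable for the next step in the workflow.
public class FetchCustomerDelegate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution execution) throws Exception {
        String customerId = (String) execution.getVariable("customerId");
        // ... call the customer microservice's REST endpoint here ...
        execution.setVariable("customerName", "Jane Doe"); // stubbed response
    }
}
```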
Benefits include, for instance, out-of-the-box support for:
state management and long-running processes / persistence of data
versioning (!)
retries and error handling
tooling to modify state and data in case something goes wrong
timeouts, parallel execution (if necessary)
scalability
graphical process model
audit trail
end-to-end visibility in monitoring tools based on the BPMN 2 model
ability to include business rules tasks (DMN) for more complex rules
combination of push and pull communication pattern and a/sync communication
business-IT alignment via BPMN 2
support for the various BPMN 2 events
standardization (skills, security, software quality, features)
...
This is a great related article about the WHY, using an airline ticket booking as an example:
https://blog.bernd-ruecker.com/3-common-pitfalls-in-microservice-integration-and-how-to-avoid-them-3f27a442cd07
This is about the important design consideration in case you go with a process engine:
https://blog.bernd-ruecker.com/the-microservice-workflow-automation-cheat-sheet-fc0a80dc25aa
I faced a similar problem to the one in the original question. I had a few microservices with simple dependencies that needed to be managed, and I did go down the path of writing my own microservice, https://github.com/pedro-r-marques/workflow, to manage the dependencies and do the orchestration. It uses a YAML definition file to describe the dependencies and RabbitMQ for message passing. One could also replace the use of RabbitMQ with REST API calls fronted by a load balancer.

Component-based application with scalability in mind: OSGi or Akka?

For my master thesis I'm developing an application framework for selling tickets for large events. My main requirements are modifiability, scalability and performance. My clients (event organisers) should be able to easily replace a component at runtime and add functionality. An example of such a component could be the seat assignment component.
My mentors said to look at OSGi. The idea of loosely coupled bundles is certainly appealing. While looking for alternatives I discovered Akka. This framework promises a lot of things, like scalability and high performance. I wondered whether Akka's concept of actors suits my modifiability requirements. Akka seems more productive than OSGi, so development would be faster, and it also seems a better fit for scalability. With OSGi I would have a harder time.
If you have experience in both OSGi and Akka, which would you recommend? What are the pros and cons of each technology when comparing them? And finally, are there any good alternatives to OSGi or Akka that cover my requirements?
EDIT
First, thank you for the replies so far, you're a great help.
As mentioned below, I'm trying to compare apples and pears. A more logical question would be: How can OSGi and Akka be used together and benefit from each other? How is this structured? Do all your actors reside in one OSGi bundle, do they each get a separate bundle, is there a hybrid solution or isn't there really a 'right' way to do it?
EDIT bis
I posted a follow up question here, asking how to combine OSGi and Akka.
As Peter says they are not directly comparable. In fact you can use them together and they should be quite complementary.
Akka provides an asynchronous communications API. OSGi provides a modular, service-oriented framework. There is nothing in Akka, for example, that would solve the problem of isolating modules so that they cannot have visibility of each others' internals. Likewise there is nothing in OSGi quite like the async communications provided by Akka. So use them together and you get the best of both worlds...
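For a feel of the asynchronous side, here is a minimal actor using Akka's classic Java API (the names are illustrative, not from the question):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class Greeter extends AbstractActor {
    @Override
    public Receive createReceive() {
        // Messages arrive asynchronously via the actor's mailbox.
        return receiveBuilder()
                .match(String.class, name ->
                        getSender().tell("Hello, " + name, getSelf()))
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        greeter.tell("Akka", ActorRef.noSender()); // fire-and-forget send
    }
}
```

In an OSGi setup, such an actor system could live in one bundle and be exposed to other bundles as a service.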
OSGi does have synchronous Services, which are the principal method of communication between modules in a single JVM. OSGi also has a Remote Services layer that can be used for communication between remote machines. This is probably the area where OSGi and Akka most directly overlap, I suppose. But even here I think there is potential for cooperation. For example, OSGi Remote Services has a really powerful discovery mechanism that allows us to advertise capabilities on the network. You could possibly use this discovery to find Akka actors that are available for you to communicate with.
I'm not aware of anybody actually working on this at present, so I think that exploring and expanding on this idea would be an excellent topic for a master's thesis!
Which university do you attend? The OSGi Alliance is very interested in fostering links with the academic community; perhaps we could set up an online meeting with you and your professor?
I think you are comparing apples and pears. You can run Scala code on OSGi (though their binary compatibility is horrible).
Scala is a programming language, and Akka is a messaging library. OSGi is a dynamic component system. So I am not sure how you can compare them.
I agree with both Neil and Peter: you're asking us to make a comparison between apples and oranges. You can use both frameworks together. I'm currently working on a project with the same main requirements you specified. I'm creating a prototype that demonstrates using both technologies together, OSGi to provide modularity and updatability, and Akka to provide scalability and performance.
If you would like to see both frameworks working together you can play with the sample application I posted on github.
They aren't an apples-to-apples comparison. They are orthogonal to each other and, if anything, complement each other. Use both.

Is PRISM a form of SOA?

Is PRISM a form of Service-Oriented Architecture?
From Wikipedia:
"Service-oriented architecture (SOA) is a flexible set of design principles used during the phases of systems development and integration in computing. A system based on a SOA will package functionality as a suite of interoperable services that can be used within multiple separate systems from several business domains."
(Preemptive snarky comment: I know, Wikipedia. Sometimes it's just the easiest thing to use as a resource.)
I think the key distinction here is that a SOA implies interaction between discrete systems over some medium. That interaction isn't necessarily defined, but the implicit assumption is that the systems are independent and use a communication mechanism to obtain services.
As a framework, Prism requires you to not be independent, i.e., it does not expose its services through some external interaction mechanism. You can't use SOAP or XML to subscribe or receive an event through IEventAggregator. That isn't Prism's purpose: it's used to build applications that may, in turn, be SOA (or not).
That being said, it obviously uses principles inherent in SOA through its usage of dependency injection containers. The fact that, in your application, you can ask for a Prism service (IRegionManager, IEventAggregator, etc.) through the container without actually worrying about the construction of the service yourself implies a service-oriented design. Of course, you do construct the service - but it happens "under the hood" in the bootstrapper. But you do have to be tightly coupled to Prism in order to get the service, and you're only going to get the service through code. Not over any medium.
(Although, there are people who have looked at exposing those services over mediums, such as PRISM and WCF - Do they play nice?, wherein IEventAggregator is exposed over WCF)
