Could you please help me understand how we can achieve multi-tenancy using Kyma?
If we want to migrate our existing cloud applications, we need to support multi-tenancy, as they all support it.
There is no special support for multitenancy in Kyma. It is up to the application developer to decide which pattern to use. You can have multitenancy at the infrastructure layer (a Kubernetes cluster per customer) or at the application layer (a single instance, but separate storage per customer). A separate cluster per customer is the default solution for enterprise customers; for smaller customers, you usually share infrastructure.
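To make the application-layer option concrete, here is a minimal Java sketch of one way to do it; the tenant-keyed DataSource map and the idea of resolving the tenant from a request header or token are assumptions for illustration, not anything Kyma prescribes:

    import java.util.Map;
    import javax.sql.DataSource;

    // One application instance serves all tenants, but each tenant's data
    // lives in its own database/schema behind a dedicated DataSource.
    public class TenantRouter {
        private final Map<String, DataSource> tenantDataSources;

        public TenantRouter(Map<String, DataSource> tenantDataSources) {
            this.tenantDataSources = tenantDataSources;
        }

        // tenantId would typically come from a request header or JWT claim.
        public DataSource forTenant(String tenantId) {
            DataSource ds = tenantDataSources.get(tenantId);
            if (ds == null) {
                throw new IllegalArgumentException("Unknown tenant: " + tenantId);
            }
            return ds;
        }
    }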
I have an application separated into various OSGi bundles which run on a single Apache Karaf instance. However, I want to migrate to a microservice framework because
Apache Karaf is pretty tough to set up due to its dependency mechanism, and
I want to be able to bring the application to the cloud later (AWS, GCloud, whatever).
I did some research, had a look at various frameworks, and concluded that Quarkus might be the right choice due to its container-based approach, its performance, and its cloud integration opportunities.
Now, I am struggling at one point and I haven't found a solution so far, but maybe I also have a misunderstanding here: my plan is to migrate almost every OSGi bundle of my application into a separate microservice. That way, I would be able to horizontally scale only the services for which this is necessary, and I could also update/deploy them separately without having to restart the whole application. Thus, I assume that every service needs to run in a separate Quarkus instance. However, Quarkus does not seem to support this out of the box?! Instead, I would need to create a separate configuration for each Quarkus instance.
Is this really the way to go? How can the services discover each other? And is there a way for a service A to communicate with a service B not only via REST calls, but also by using objects, classes, and methods of service B, i.e. by adding a dependency on service B to service A?
Thanks a lot for any ideas on this!
I think you are mixing up some points between microservices and OSGi-based applications. With microservices you usually have an independent process running each microservice, and these can be deployed on the same or on different machines. Because of that you can scale as you said and gain the benefits. But the communication model is no longer an in-process call. It has to use a different approach, and it is highly recommended that you use a standard integration mechanism: REST, JSON-RPC, SOAP, or queues/topics for event-driven communication. With these mechanisms you invoke the 'other' service's operations just as you do in OSGi, but through a different interface: instead of a local invocation you make a remote one.
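For example, in Quarkus the remote invocation can still look like a typed Java interface via the MicroProfile REST Client. A minimal sketch (the service name, path, and config key are made up for illustration):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

    // Where an OSGi bundle would call service B's Java interface directly,
    // a microservice declares a typed client; the calls go over HTTP.
    @Path("/items")
    @RegisterRestClient(configKey = "service-b")
    public interface ServiceBClient {

        @GET
        @Path("/{id}")
        String getItem(@PathParam("id") String id);
    }

Service A then injects this with @Inject @RestClient and sets service B's base URL in its configuration, so the code reads like a local call even though the invocation is remote.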
Service discovery is something you can do with just virtual IPs, accessing other services through a common DNS name and a load balancer, or using Kubernetes DNS if you go with Kubernetes as the platform. You could also use a central configuration service, or let each service register itself in a central registry. There are already plenty of different flavours of solutions to tackle this complexity.
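With Kubernetes DNS, discovery disappears from the code entirely: you call the Service by name and the cluster resolves it. A small sketch, assuming a hypothetical Kubernetes Service named service-b exposing port 8080:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ServiceBCaller {
        public static void main(String[] args) throws Exception {
            // "service-b" resolves via cluster DNS to the Service's virtual IP,
            // which load-balances across the healthy pods behind it.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://service-b:8080/items/42")).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }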
Also, and more importantly, you will have to be aware of your new complexities, though some of them you already have:
Contract versioning and design
Synchronous or asynchronous communication between services.
How to deal with security at the boundary of the services / do I even need security in most of my services, or do I just need information about the user's identity?
Increased maintenance cost and redundant code for common features (here Quarkus helps you a lot with its extensions, and you also have MicroProfile compatibility).
...
Deciding to go with microservices is not an easy decision, and not one that should be taken in a single step. My recommendation is that you analyse your application domain and check whether your design is fit for microservices (in terms of separation of concerns and model cohesion), then extract small parts of your OSGi platform into microservices. Otherwise you will mostly be forced to make changes to your service interfaces, which is more difficult than changing a method and some invocations because of the service-to-service contract dependency.
We are rewriting a legacy app using microservices. Each microservice has its own DB. Certain API calls require calling another microservice and persisting data into both DBs. How can we implement distributed transaction management effectively in this case?
Since we have not migrated completely to the new microservices environment, we still write data back to the old monolith. For this, when a microservice endpoint is called, we call a monolith service from the microservice API to write back the same data. How do we deal with the same problem in this case as well?
Thanks in advance.
There are different distributed transaction frameworks, usually included and maintained as part of heavyweight application servers like JBoss and WebLogic.
The standard usually used by such services is Jakarta Transactions (JTA; formerly Java Transaction API).
Tomcat and Spring don't support distributed transactions out of the box. You can add this functionality using a third-party framework like Atomikos (just googled it; I've never used it).
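For reference, programmatic JTA looks roughly like the sketch below; it assumes a JTA provider (an application server, or a library such as Atomikos) supplies the UserTransaction, and that both databases are accessed through XA-capable datasources:

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TransferService {

        public void transfer() throws Exception {
            // In a Java EE server the transaction manager is bound in JNDI.
            UserTransaction utx = (UserTransaction) new InitialContext()
                    .lookup("java:comp/UserTransaction");
            utx.begin();
            try {
                // ... write to database A via an XA datasource ...
                // ... write to database B via another XA datasource ...
                utx.commit(); // two-phase commit across both resources
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }
    }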
But remember, a microservice with JTA is not "micro" anymore :-)
Here is a small overview over available technologies and possible workarounds:
https://www.baeldung.com/transactions-across-microservices
If you can afford to write to the legacy system later (i.e. allow some latency between updating the microservice and the legacy system), you can use the outbox pattern.
Essentially, that means that you write to the microservice database in a transactional way, both to the tables you usually write to and to an additional "outbox" table of changes to apply, and then have a separate process that reads that table and updates the legacy system.
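A minimal JDBC sketch of that first, transactional step; the table names (orders, outbox) and columns are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    public class OrderService {
        private final DataSource dataSource;

        public OrderService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public void createOrder(long orderId, String payloadJson) throws Exception {
            try (Connection con = dataSource.getConnection()) {
                con.setAutoCommit(false); // one local transaction, no 2PC needed
                try (PreparedStatement order = con.prepareStatement(
                             "INSERT INTO orders (id, payload) VALUES (?, ?)");
                     PreparedStatement event = con.prepareStatement(
                             "INSERT INTO outbox (event_type, payload) VALUES (?, ?)")) {
                    order.setLong(1, orderId);
                    order.setString(2, payloadJson);
                    order.executeUpdate();
                    event.setString(1, "ORDER_CREATED");
                    event.setString(2, payloadJson);
                    event.executeUpdate();
                    con.commit(); // business row and outbox row commit atomically
                } catch (Exception e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }

A separate relay process then polls the outbox table, applies each change to the legacy system, and marks the row as processed on success.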
You can also achieve something similar with a change data capture (CDC) mechanism on the DB used in the microservice(s).
Check out this answer on "Why is 2-phase commit not suitable for a microservices architecture?": https://stackoverflow.com/a/55258458/3794744
We would like to implement a multi-tenant solution for SCDF in which each tenant may have unique task definitions, etc. Ideally we only want a single SCDF server (as opposed to setting up an SCDF server for each tenant), as pictured:
Is this possible, or is the only way to achieve isolation of the data between tenants to have separate Data Flow server instances?
What you're attempting here is not possible today. You'd have to provision SCDF for each tenant. On cloud platforms like Kubernetes or Cloud Foundry this is the recommended approach, because you can access-control the tenants through "namespace" and "org/space" isolation respectively. On this foundation, the platforms provide a more robust separation through RBAC assignments for each user in the tenant.
A little more background as to why we do this today: SCDF and the task/job repositories are coupled in the sense that the Dashboard and the other client tools interact with the same datasource to provide a consistent UX for monitoring and managing the data pipelines centrally. Even with the recent multi-platform backend support for tasks, you're still expected to use a common datasource in the current design.
All that said, we are looking into letting users have a database with schemas prefixed with an identifier [see: spring-cloud/spring-cloud-dataflow#2048]. With that in place, it would be possible to filter by the identifier-specific task/job executions and likewise track them as isolated units of operation within a single SCDF instance.
However, it may not scale for cloud deployments. Each tenant isolation boundary, for instance a "namespace" in Kubernetes, needs enough resources (CPU/memory/disk) to handle "multiple" tenant deployments of task/batch apps. If you don't autoscale the resource capacity, you'd run into deployment failures.
Maybe you could help by describing your requirements in some more detail, so we can relate to why this could still be useful. Please also share how you're going to design the resource allocations in the underlying deployment platform; feel free to comment in #2048.
I currently have a set of ERP-style web applications built on top of Spring 3. They are deployed into Tomcat 7.
The system was developed some time ago without a cleanly defined architecture. Each application has three parts (as sub-projects): API, defining the models and interfaces; IMPL, the service layer; and WEB.
The current layout of the system is as below.
The Financial API+IMPL is included in the Inventory module to achieve transaction management. We previously tried to separate Inventory and Financial into different web apps by using REST calls, but faced issues with transaction management. What we are currently doing is @Autowiring the Financial IMPL directly into the Inventory services, roughly as sketched below. For instance, when a sale invoice is made, both the financial and the inventory operations must be in the same transaction.
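A hedged sketch of that coupling (the bean and method names are illustrative stand-ins for the actual classes):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    // Stand-ins for the real beans:
    @Service
    class FinancialServiceImpl {
        void postJournalEntry(String invoiceId) { /* financial write */ }
    }

    @Service
    class InventoryService {
        void reserveStock(String invoiceId) { /* inventory write */ }
    }

    @Service
    public class SaleInvoiceService {

        @Autowired
        private FinancialServiceImpl financialService; // the IMPL wired in directly

        @Autowired
        private InventoryService inventoryService;

        @Transactional // inventory and financial writes commit or roll back together
        public void createSaleInvoice(String invoiceId) {
            inventoryService.reserveStock(invoiceId);
            financialService.postJournalEntry(invoiceId);
        }
    }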
Now, as was to be expected, issues with this approach pop up: the Inventory system is super heavy, as it needs to boot a duplicate Financial layer.
I guess introducing a messaging middleware like HornetQ or ActiveMQ would be the best thing to do here, as below.
My questions are:
How can I achieve centralized transaction management between Financial and Inventory?
Is there a benefit here if I use a Java EE server like JBoss?
If I move one of the applications into another JBoss instance running on another server, can I still have centralized transactions?
I want to adopt an EIP solution for the cloud deployment of a web application:
The application will be developed in such a way that each layer (e.g. data, service, web) comes out as a separate module and artifact.
Each layer can be deployed on a different virtual resource in the cloud. In this regard, web nodes will in some way find the related service nodes, and likewise service nodes are connected to data nodes.
Objects in the service layer provide REST access to the services in the application. The web layer is supposed to use the REST services from the service layer to complete requests for users of the application.
For the above requirement, delivering a "highly scalable" application on the cloud, it seems that solutions such as Apache Camel, Spring Integration, and Mule ESB are significant options.
There have been other discussions, such as a question or a blog post on this topic, but I was wondering if anybody has specific experience with such a deployment scheme on "the cloud"? I'd be thankful for any ideas and shared experiences. TIA.
To me this looks a bit like overengineering. Is there a real reason why you need to separate all those layers? What you describe looks a lot like the J2EE applications of some years ago.
How about deploying all layers of the application onto each node and just using simple Java calls or OSGi services to communicate?
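A minimal sketch of what that looks like in code; the controller and service names are made up for illustration:

    // The web layer depends on the service layer through a plain Java
    // interface; both run in the same JVM, so the call is local.
    interface OrderService {
        String loadOrder(long orderId);
    }

    public class OrderController {
        private final OrderService orderService; // same-JVM dependency

        public OrderController(OrderService orderService) {
            this.orderService = orderService;
        }

        public String handleRequest(long orderId) {
            // Direct method call: no serialization, no network hop, and the
            // whole request can run in a single local transaction.
            return orderService.loadOrder(orderId);
        }
    }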
This approach has several advantages:
Less complexity
No serialization or DTOs
Transactions are easy / no distributed transactions necessary
Load balancing and failover are much easier, as you can do them on the web layer only
Performance is probably a lot higher
You can implement such an application using Spring or Blueprint (on OSGi).
Another option is to use a modern Java EE server. If this is interesting to you, take a look at some of the courses of Adam Bien; he shows how to use Java EE in a really lean way.
For communication between nodes I have had good experiences with Camel and CXF, but you should try to avoid remoting as much as possible.