Performance comparison between ff4j and togglz

We're working on a project and we want to use a feature-toggling tool like ff4j or togglz, but we have real performance constraints: we need the tool with the lowest execution overhead. I've looked a little at ff4j and togglz, but I don't know which is the better fit, or maybe you know of some other tools.
Project context: it's a Netflix microservices architecture, so we have Eureka, Ribbon, Zuul, and the microservices themselves.
Otherwise, if you have another solution, maybe developing a sidecar, please give me some ideas.
Thank you in advance :)

Disclaimer: I created FF4j, so I won't give an answer about the performance comparison itself; instead I will offer some architecture design principles.
Microservices mean a distributed architecture, so you will have to store the state of your features in a common persistent store (a DB).
The cost of a feature-toggle framework won't be the time to evaluate the feature-state predicate (it is a simple condition); it will be the time to fetch the feature data from the persistence store.
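To illustrate, the guard in application code is just a boolean check; everything expensive happens inside the store lookup behind it. A minimal sketch (the feature id and service are hypothetical):

    import org.ff4j.FF4j;

    public class SearchService {

        private final FF4j ff4j;

        public SearchService(FF4j ff4j) {
            this.ff4j = ff4j;
        }

        public String search(String query) {
            // The predicate itself is a trivial boolean; the real cost is the
            // feature-store lookup performed inside ff4j.check(...).
            if (ff4j.check("new-search-engine")) {   // hypothetical feature id
                return "results from the new engine for " + query;
            }
            return "results from the legacy engine for " + query;
        }
    }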
FF4j provides support for both Redis and Consul:
Redis seems a good candidate, as it is very fast for put/get and is distributed.
Consul is also a good idea in a distributed microservices setup: it provides a key-value store.
Eureka may do the same, I don't know; ff4j does not have a store for it yet.
If you have to store your features in a slower DB, such as an SQL database, then you might consider caching. FF4j provides a cache proxy to handle such use cases.
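A minimal wiring sketch of that setup, assuming the FF4jCacheProxy, InMemoryCacheManager, and JdbcFeatureStore classes from ff4j 1.x (check the exact API of your version):

    import javax.sql.DataSource;

    import org.ff4j.FF4j;
    import org.ff4j.cache.FF4jCacheProxy;
    import org.ff4j.cache.InMemoryCacheManager;
    import org.ff4j.store.JdbcFeatureStore;

    public class FeatureToggleConfig {

        public FF4j ff4j(DataSource dataSource) {
            // Features live in a (slow) relational database...
            JdbcFeatureStore jdbcStore = new JdbcFeatureStore(dataSource);

            // ...but reads go through an in-memory cache so that not every
            // ff4j.check(...) call costs a database round-trip.
            // (No property store in this sketch, hence the null.)
            FF4jCacheProxy cachedStore =
                    new FF4jCacheProxy(jdbcStore, null, new InMemoryCacheManager());

            FF4j ff4j = new FF4j();
            ff4j.setFeatureStore(cachedStore);
            return ff4j;
        }
    }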
Other considerations:
Put the administration console only in a backend application, not in each microservice (security + performance overhead).
Feature toggling can do more for you when combined with configuration management and monitoring.
You may want to have a look at this 15-minute talk on exactly that subject (live demo starting at 7:10) and the related GitHub repository for a sample with Spring Cloud.

Related

XA support for Microservices

Scenario: I have multiple XA compliant databases fronted by different microservices which perform CRUD operations on them. I need to perform a 2 phase commit among these microservices. This means that I have a server running which makes API calls into these microservices to do some update, and these updates should be transactional.
We are planning to create a transaction manager to manage this.
Question: all the available solutions, such as Atomikos, mandate that the different transactions happen on the same server, but in my case they happen in different microservices.
How can we provide transaction management in this case?
Ultimately, we want to prepare transactions and then commit them in a different session, as managed by our own transaction manager.
Is that possible?
It is definitely possible (you can do xa_prepare and xa_commit in separate sessions on most, if not all by now, resource managers), but in the end you will essentially be writing a Java EE (JTA)-style transaction manager with transaction-context propagation over REST, messaging, or whatever communication mechanism you are using. This has been done, e.g., in the REST-AT specification that Narayana/JBoss and a few others implemented.
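To make the shape of that concrete, here is a deliberately minimal sketch of a two-phase-commit coordinator over REST. The /prepare, /commit, and /rollback endpoints are hypothetical; a real implementation (REST-AT, MicroTx) adds recovery logging, timeouts, and idempotency on top of this control flow:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class TwoPhaseCommitCoordinator {

        private final HttpClient http = HttpClient.newHttpClient();

        /** Runs prepare on every participant, then commit or rollback on all. */
        public boolean execute(String txId, List<String> participantBaseUrls) throws Exception {
            // Phase 1: ask every participant to prepare (the xa_prepare analogue).
            for (String base : participantBaseUrls) {
                if (post(base + "/transactions/" + txId + "/prepare") != 200) {
                    // Any veto aborts the whole transaction.
                    participantBaseUrls.forEach(
                            b -> postQuietly(b + "/transactions/" + txId + "/rollback"));
                    return false;
                }
            }
            // Phase 2: all prepared, so commit everywhere (the xa_commit analogue).
            for (String base : participantBaseUrls) {
                post(base + "/transactions/" + txId + "/commit");
            }
            return true;
        }

        private int post(String url) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            return http.send(req, HttpResponse.BodyHandlers.discarding()).statusCode();
        }

        private void postQuietly(String url) {
            try { post(url); } catch (Exception ignored) { }
        }
    }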
WebLogic has a Kubernetes operator with a number of years on it now, so XA/2PC can simply continue to be used there, and Tuxedo will be putting out a product to achieve the same end (over REST).
The saga pattern should also definitely be considered. It is not to be blindly accepted, nor dismissed out of hand, as a great pattern/fit in the microservices space. Use cases in transaction management, like any other area, continue to be more and more optimized and specialized, so the fact that it involves eventual consistency, compensation, etc. should not be a non-starter in and of itself, as it has a number of significant advantages as far as deployment models, scaling, and, to your point, the removal of XA distributed locks. The best solution depends on the specific use case and its requirements.
A number of the microservices frameworks, such as Narayana (WildFly/Quarkus/SpringBoot), Helidon, and even inside the Oracle DB itself, have Saga engines now. Full disclosure, I work at Oracle and am putting out a workshop on this very product in the next few weeks which will build on the existing "Simplifying Microservices with converged Oracle Database Workshop" which has a very basic choreography-based saga (as opposed to the orchestration-based products/engines I mentioned).
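For illustration, a bare-bones orchestration-style saga engine looks something like the sketch below (a hypothetical API, not any of the engines named above): each step carries a compensating action that runs in reverse order if a later step fails.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class SagaOrchestrator {

        interface Step {
            void execute();
            void compensate();
        }

        public void run(Step... steps) {
            Deque<Step> completed = new ArrayDeque<>();
            try {
                for (Step step : steps) {
                    step.execute();
                    completed.push(step);     // remember for compensation
                }
            } catch (RuntimeException failure) {
                // Eventual consistency: undo completed steps in reverse order.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw failure;
            }
        }
    }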
Very happy to talk more on this topic as I've spent the last 25 years writing transaction managers. :)
For the scenario described in the question, you should try Oracle Transaction Manager for Microservices (MicroTx). It is a free product that comes with a transaction manager and a client library for microservices written in Java and Node.js. With this, you can create XA transactions involving multiple microservices.
Oracle MicroTx - https://www.oracle.com/database/transaction-manager-for-microservices
I don't think you mean a different session exactly, but rather one application-tier-level transaction where application components on different servers are inside the transaction boundary.
The issue you are facing is that those who created microservices were not aware of, or experienced enough with, information systems to understand these scenarios.
Microservices are essentially a false generalization, a derivative of a stereotype.
Transactions and many other basic concepts that historically allow enterprise information systems to exchange information globally, without proprietary vendor lock-in, are simply not part of the microservices understanding.
So your question is really: how do you retrofit the architecture concept to do normal day-to-day computing work?
In the end, if you keep solving those problems, you will be back at a Java EE application server. (Spring went through the same failings and just ended up wrapping and rebranding Java EE standard functions, but with more obnoxious rhetoric.)
My business logic on GlassFish can talk to the business logic on WebLogic and the CICS transaction on the mainframe, with everyone's databases and message queues all on different servers, in one transaction. The XA spec lays out how to do this.
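As a concrete reminder of what that looks like in application code, here is a minimal JTA sketch with two XA-capable DataSources enlisted in one global transaction (the JNDI names and SQL are hypothetical, and error handling is simplified; the app server's transaction manager drives the XA prepare/commit underneath):

    import java.sql.Connection;

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TransferService {

        public void transfer() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");

            // Two XA-capable DataSources configured in the app server;
            // the JNDI names are hypothetical.
            DataSource ordersDb  = (DataSource) ctx.lookup("jdbc/OrdersXA");
            DataSource billingDb = (DataSource) ctx.lookup("jdbc/BillingXA");

            utx.begin();
            try {
                try (Connection orders = ordersDb.getConnection();
                     Connection billing = billingDb.getConnection()) {
                    // Two different resource managers inside one global
                    // transaction; the JTA transaction manager coordinates
                    // the XA prepare/commit across both.
                    orders.createStatement().executeUpdate(
                            "UPDATE orders SET status = 'PAID' WHERE id = 42");
                    billing.createStatement().executeUpdate(
                            "INSERT INTO invoices(order_id) VALUES (42)");
                }
                utx.commit();
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }
    }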

Data Migration using Spring

We are beginning the process of re-architecting the systems within our company.
One of the key components of the work is a new data model which better meets our requirements.
A major part of the initial phase of the work is to design and build a data migration tool.
This will take data from one or more existing systems and migrate it to the new model.
Some requirements:
Transformation of data to the new model
Enrichment of data, with default values or according to business rules
Integration with existing systems to pull data
Integration with Salesforce CRM which is being introduced into the company.
Logging and notification about failures
Within the Spring world, which is the best Spring project to use as the underlying framework for such a data migration tool?
My initial thoughts are to look at implementing the tool using Spring Integration.
This would:
Allow the high-level data flow to be seen, understood, and edited through the XML or the DSL (possibly using a visual tool such as an STS plugin); being able to view the high-level flow in such a way is a big advantage.
Provide connectors to work with different data sources.
Allow transformer components to be built to migrate data formats.
Provide routers to route the data in the new model to endpoints which connect with the target systems.
However, are there other Spring projects, such as Spring Data or Spring Batch, which are a better match for the requirements?
Very much appreciate feedback and ideas.
I would certainly start with Spring Integration, which exposes a bare-bones implementation of the Enterprise Integration Patterns that are at the core of most, if not all, of the requirements you listed.
It is also an exceptionally good problem-modelling tool, which helps you better understand the problem and then envision its implementation as one cohesive integration flow; see the sketch below.
Later on, once you have a clear understanding of how things work, it would be simple to take it to the next level by introducing the other frameworks you mentioned/tagged, adding spring-cloud-data-flow and spring-cloud-stream.
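As a rough illustration of such a flow, here is a Spring Integration Java DSL sketch of the migration pipeline described in the question: poll files from a legacy export directory, transform each one to the new model, then route by record type. The directory, channel names, and the MigratedRecord type are all hypothetical, and the DSL calls assume Spring Integration 5.x:

    import java.io.File;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.file.dsl.Files;

    @Configuration
    public class MigrationFlowConfig {

        @Bean
        public IntegrationFlow migrationFlow() {
            return IntegrationFlows
                    // Inbound adapter: poll the legacy export directory.
                    .from(Files.inboundAdapter(new File("/data/legacy-export")),
                          e -> e.poller(p -> p.fixedDelay(5000)))
                    // Transformer: parse and enrich into the new data model.
                    .transform(File.class, MigrationFlowConfig::toNewModel)
                    // Router: send each record to the channel for its type.
                    .<MigratedRecord, String>route(r -> r.type,
                            m -> m.channelMapping("customer", "salesforceChannel")
                                  .channelMapping("order", "orderSystemChannel"))
                    .get();
        }

        private static MigratedRecord toNewModel(File file) {
            // Hypothetical parsing/enrichment: apply defaults and business rules.
            return new MigratedRecord("customer", file.getName());
        }

        static class MigratedRecord {
            final String type;
            final String payload;
            MigratedRecord(String type, String payload) {
                this.type = type;
                this.payload = payload;
            }
        }
    }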
Overall this question is rather broad, so consider following the pointers above, get started, and raise more concrete questions as they come up.

Spring in memory data grid application

Is it sensible to use Spring on the server side of an application based on an in-memory data grid?
My gut feeling tells me that it is nonsense in a low-latency, high-performance system. A colleague of mine is insisting on including Spring in it. What are the pros and cons of such an inclusion?
My position is that Spring is OK to use in the client, but it is too heavy for the server: it brings in too many dependencies and is one more leaky abstraction to think about.
Data grid systems are memory- and I/O-intensive in general. Using Spring does not change that (you may argue that Spring creates a lot of beans, but with proper garbage-collection tuning this is not a problem).
On the other hand, using Spring (or any other DI framework) helps you structure and test your code.
So if you are implementing some sort of server based on a data grid, pay attention to properly tuning the GC and the socket settings in your OS (memory buffers and socket buffer sizes). Those will give you much more benefit than cutting out DI.
First, I'm surprised by the "leaky abstraction" comment. I've never heard anyone criticize Spring for this; in fact, it's just the opposite. Spring removes the implementation details of infrastructure such as data grids from your application code and provides a consistent and familiar programming model, allowing you to focus on business logic. Spring does a lot to enhance configuration of and access to data grids, especially GemFire, and generally does not create any runtime overhead per se. During initialization of a Spring application, Spring uses tools like reflection and AOP internally, which may increase the start-up time of an application, but this has no impact on runtime performance. Spring has been proven in many high-throughput, low-latency production applications. Even in extreme cases, things like network latency and serialization (concerns external to Spring) are normally the biggest factors affecting performance.
"Spring brings in too many dependencies" is a common complaint, but is a fallacy. I would say Spring brings in the exact right amount of dependencies for what it needs to do. Additionally, Spring Boot starters and the platform BOM do a lot to simplify dependency management so you don't need to worry about version incompatibilities or explicitly declaring common dependencies. I'll have to side with your colleague on this one.

Application performance monitoring in Scala

I have a web service written in Scala, built on top of the Twitter Finagle RPC system. We are now hitting some performance issues. We have external API components and a database layer.
I am planning to install Zipkin in order to have a service-level tracing system. This will allow me to know where the bottleneck is at the service level.
I am wondering, though, whether there are frameworks out there to monitor performance inside my application layer. The application is a suite of filters that are applied consecutively to my data, and I would like to know which filters take time to compute. I have heard about JVM profiling, but it seems a little overkill for what I want to do. What would you recommend? Thanks for your help.
Well, before digging into JVM profiling or setting up all the infrastructure Zipkin needs, you could simply start by measuring some application-level metrics.
You could try the Metrics library via its Scala API.
Basically, you manually set up counters and gauges at specific points of your application, and these will help you diagnose your bottleneck.
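As an illustration, here is a minimal sketch using the underlying (Java) Dropwizard Metrics library that the Scala API wraps; a Timer from the same library measures each filter stage, and the metric names, reporting interval, and filter are hypothetical:

    import java.util.concurrent.TimeUnit;

    import com.codahale.metrics.ConsoleReporter;
    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.Timer;

    public class FilterTiming {

        private static final MetricRegistry registry = new MetricRegistry();

        public static void main(String[] args) {
            // Report timer stats (rates + latency percentiles) every 10 seconds.
            ConsoleReporter.forRegistry(registry)
                    .convertDurationsTo(TimeUnit.MILLISECONDS)
                    .build()
                    .start(10, TimeUnit.SECONDS);

            Timer normalizeTimer = registry.timer("filters.normalize");

            // Wrap each filter invocation in its own timer to see which
            // stage of the pipeline dominates the latency.
            try (Timer.Context ignored = normalizeTimer.time()) {
                normalizeFilter();
            }
        }

        private static void normalizeFilter() {
            // ... the actual filter work ...
        }
    }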

Spring Integration as embedded alternative to standalone ESB

Does anybody have experience with the Spring Integration project as an embedded ESB?
I'm highly interested in use cases such as:
Reading files from a directory on a scheduled basis
Getting data from a JDBC data source
Modularity and the possibility to start/stop/redeploy a module on the fly (e.g. one module scans a directory on a schedule, another queries a JDBC data source, etc.)
Repeat/retry policy
UPDATE:
I found answers to all my questions except "Getting data from a JDBC data source". Is it technically possible?
Remember, "ESB" is just a marketing term designed to sell more expensive software, it's not a magic bullet. You need to consider the specific jobs you need your software to do, and pick accordingly. If Spring Integration seems to fit the bill, I wouldn't be too concerned if it doesn't look much like an uber-expensive server installation.
The Spring Integration JDBC adapters are available in 2.0, and we just released the GA last week. Here's the relevant section of the reference manual: http://static.springsource.org/spring-integration/docs/latest-ga/reference/htmlsingle/#jdbc
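To show the shape of it, here is a minimal sketch built on JdbcPollingChannelAdapter, using modern annotation-style configuration rather than the XML of that era; the table, channel names, and SQL are hypothetical:

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.InboundChannelAdapter;
    import org.springframework.integration.annotation.Poller;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.integration.core.MessageSource;
    import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;
    import org.springframework.messaging.Message;

    @Configuration
    public class JdbcInboundConfig {

        // Poll NEW rows every 5 seconds and publish them as messages.
        @Bean
        @InboundChannelAdapter(channel = "newOrders",
                               poller = @Poller(fixedDelay = "5000"))
        public MessageSource<Object> jdbcSource(DataSource dataSource) {
            JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(
                    dataSource, "SELECT * FROM orders WHERE status = 'NEW'");
            // Mark rows as processed so they are not picked up again.
            adapter.setUpdateSql(
                    "UPDATE orders SET status = 'QUEUED' WHERE status = 'NEW'");
            return adapter;
        }

        @ServiceActivator(inputChannel = "newOrders")
        public void handle(Message<?> message) {
            // Each poll result (a List of row maps by default) arrives here.
            System.out.println("Received: " + message.getPayload());
        }
    }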
This link describes the "FileSucker" pattern with Spring Integration. Read up on the Enterprise Integration Patterns for more info, I think.
I kind of think you need to do a bit more investigation yourself, or run a couple of trials on some of your use cases. Then we can discuss what's good and bad.
JDBC Adapters appear to be a work in progress.
Even if there is no specific adapter available, remember that Spring Integration is a thin wrapper around POJOs. You'll be able to access JDBC from any component, e.g. your service activators.
See here for a solution based on a polling inbound channel adapter, too.
