Spring Integration as embedded alternative to standalone ESB

Does anybody have experience with the Spring Integration project as an embedded ESB?
I'm highly interested in use cases such as:
Reading files from a directory on a scheduled basis
Getting data from a JDBC data source
Modularity and the ability to start/stop/redeploy a module on the fly (e.g. one module scans a directory on a schedule, another queries a JDBC data source, etc.)
Repeat/retry policy
UPDATE:
I found answers to all my questions except "Getting data from a JDBC data source". Is it technically possible?

Remember, "ESB" is just a marketing term designed to sell more expensive software, it's not a magic bullet. You need to consider the specific jobs you need your software to do, and pick accordingly. If Spring Integration seems to fit the bill, I wouldn't be too concerned if it doesn't look much like an uber-expensive server installation.

The Spring Integration JDBC adapters are available in 2.0, and we just released GA last week. Here's the relevant section from the reference manual: http://static.springsource.org/spring-integration/docs/latest-ga/reference/htmlsingle/#jdbc
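As a taste, here is a minimal sketch of a polling JDBC inbound channel adapter in Java config (the orders table, column names, and channel name are hypothetical; in the 2.0 era you would express the same thing with the XML namespace described in the manual):

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

@Configuration
@EnableIntegration
public class JdbcPollingConfig {

    // Every 5 seconds, run the SELECT and publish the result set
    // (a List<Map<String, Object>>) as one message on "jdbcChannel".
    @Bean
    @InboundChannelAdapter(channel = "jdbcChannel", poller = @Poller(fixedDelay = "5000"))
    public MessageSource<Object> jdbcSource(DataSource dataSource) {
        JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(
                dataSource, "SELECT * FROM orders WHERE status = 'NEW'");
        // Flag the rows within the same poll so they are not re-read next time.
        adapter.setUpdateSql("UPDATE orders SET status = 'SEEN' WHERE id IN (:id)");
        return adapter;
    }
}
```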

This link describes the FileSucker pattern with Spring Integration. Read up on your Enterprise Integration Patterns for more info, I think.
I kinda think you need to do a bit more investigation yourself, or run a couple of tries on some of your use cases. Then we can discuss what's good and bad.

JDBC Adapters appear to be a work in progress.
Even if there is no specific adapter available, remember that Spring Integration is a thin wrapper around POJOs. You'll be able to access JDBC in any component, e.g. your service activators.
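For instance, a minimal sketch of a plain POJO service activator that does its work through JdbcTemplate (the channel, table, and column names are hypothetical):

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderPersister {

    private final JdbcTemplate jdbcTemplate;

    public OrderPersister(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Spring Integration invokes this with the message payload;
    // inside, it is ordinary JdbcTemplate code.
    @ServiceActivator(inputChannel = "ordersChannel")
    public void persist(String orderId) {
        jdbcTemplate.update("UPDATE orders SET status = 'PROCESSED' WHERE id = ?", orderId);
    }
}
```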
See here for a solution based on a polling inbound channel adapter too.

Related

Dynamic Camel route configuration at deployment time: Java DSL or XML DSL?

Let me preface this with the fact that I am still very new to Apache Camel. I'm still trying to understand how it all works, and what needs to be done (and HOW to do it) to achieve a particular effect.
I am trying to develop a Spring Boot application that will use Apache Camel to handle the transmission (and possibly also receipt) of data to/from a number of possible sources and destinations. The purpose of the application is to provide a means to produce/generate network traffic, at the network application level, that will be fed into another Spring Boot application - let's call this the target. We are trying to observe and measure the effects various network loads have on the target.
We would like to be able to transmit data via a number of protocols, including: ftp, http/s, file systems (nfs), various mail protocols (smtp, pop) and data streaming protocols for voice and video. There may be other protocols added at a later time. The data itself is irrelevant, we just need to be able to transmit data via various protocols with various loads.
These applications/services will be running in a containerized environment (Docker) that will be run within our local development and test environment, as well as possibly in a cloud environment, such as AWS. We have used Docker, Ansible, Terraform and are currently working towards using Kubernetes and Istio to manage the configuration, deployment, and operation of these applications.
We need to be able to provide specific configurations of Camel routes for particular deployments.
It would appear that the preferred method to configure Camel routes is via the Java DSL, rather than the XML DSL. The Camel documentation and nearly every other source of information I've found have a strong bias towards using the Java DSL. Examples of XML DSL route configuration are few and far between.
My initial impression is that going the Java DSL route (excuse the pun) would not work well with our need to be able to deploy a Camel application with a specific route configuration. It seems like you are required to have Java DSL-defined route configurations hardwired into the code.
We think that it will be easier to provide a specific route configuration via an XML file that can be included in a deployment, which is why I've been trying to investigate and experiment with the XML DSL. Perhaps we are mistaken in this regard.
My question to the community is: Considering what I've described above, can the Java DSL approach be used to meet the requirements as I've described them? Can we use Java DSL in a way that allows for dynamic route configuration? Keep in mind we would not be attempting to change configuration during operation, just in the course of performing a deployment.
If Java DSL could be used for this purpose, it would be very much appreciated if pointers to documentation, examples, etc. could be provided.
For your use cases you could use the XML DSL also. Anyhow, the book below covers most aspects of Camel development with examples. In it the authors describe the XML DSL usage for most of the Java DSL examples.
https://www.manning.com/books/camel-in-action-second-edition
In the GitHub repository below you can find the source code for all the examples listed in the above book.
https://github.com/camelinaction/camelinaction2
A simple tutorial and GitHub repository for Apache Camel using Spring Boot:
https://www.baeldung.com/apache-camel-spring-boot
https://github.com/eugenp/tutorials/tree/master/spring-boot-modules/spring-boot-camel
A Maven plugin for building and deploying a Spring Boot container application into a Kubernetes cluster:
https://maven.fabric8.io/
In case your company can afford some funding for your effort, look at the link below, which lists commercial offerings around Camel.
https://camel.apache.org/manual/latest/commercial-camel-offerings.html
Thanks
Madhu Gupta
Our team has a few projects which use the Java DSL for building routes. In order to make them dynamic, there are control structures for iterating and setting endpoints based on configuration. That works for us because the routes are basically all the same, just with different sources and sinks.
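For example, a minimal sketch of that pattern (the endpoint URIs and route ids are hypothetical, not taken from our projects):

```java
import java.util.List;

import org.apache.camel.builder.RouteBuilder;

public class ConfiguredRoutes extends RouteBuilder {

    // In real use these pairs would come from application.yml or another
    // external configuration source; they are hard-coded for the sketch.
    private final List<String[]> endpointPairs = List.of(
            new String[] { "ftp://load-source/inbox", "file:/data/out" },
            new String[] { "file:/data/in", "http://target-host/ingest" });

    @Override
    public void configure() {
        int i = 0;
        for (String[] pair : endpointPairs) {
            // Same route shape every time; only source and sink differ.
            from(pair[0]).routeId("load-route-" + i++).to(pair[1]);
        }
    }
}
```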
If you could dynamically add/change the XML DSL files in a way that doesn't involve redeploying your application, that might be a viable route to follow. One might, for example, change the camel.springboot.xml-routes property to point to a folder which changes as needed.

Spring boot 2.1.5, WebFlux, Reactor: How to deal properly with MDC

Spring boot 2.1.5
Project Reactor 3.2.9
I am setting up a bunch of reactive REST APIs using the above-mentioned frameworks and I am running into an annoying problem with MDC (mapped diagnostic context). My applications are in Java.
MDC relies on thread locals to store the current query's mapped context to put in the logs. That system, obviously, is not perfect and contradicts the reactive pattern since the different steps of your execution will be executed through different threads.
I have run into the same problem with the Play Reactive framework but found a workaround there by copying the mapped context transparently from one actor to another.
For spring and reactor, I could not find a satisfying solution yet.
Some random examples found on the internet:
First - It works but forces you to use a bunch of utility methods
Same thing
Second - It tries to copy the context during the onNext publisher event, but seems to lose some features along the way. The signal context, for example, is lost.
I am in need of a proper solution to deal with this:
A library which would make the link between MDC and Reactor?
A way to tweak Reactor/Spring to achieve it transparently?
Any advice?
"I could not find a satisfying solution yet."
Working with contexts is the only solution for the moment, since, as you said, thread locals go against everything reactive programming stands for. Using a thread local as a storage point during a request is a resource-heavy way of solving things and, in my opinion, poor design. Unless logging frameworks themselves come up with a better solution to the problem, we developers must pass the data through the context to accommodate the logging frameworks' blocking nature.
Reactive programming is a paradigm shift in the programming world. Other things, like database drivers that use thread locals to roll back transactions, are also in big trouble. The JDBC driver spec is defined as blocking in nature, and at the moment there are attempts by Spring and the R2DBC project to define a new driver spec that is inherently non-blocking. This means that all vendors must rewrite their database driver implementations from scratch.
Reactive programming is so new that lots of libraries need to rewrite entire codebases. The logging frameworks as we know them need to be rewritten from the ground up, which is a huge task. And the Context in Reactor is actually something that should not even be in reactive programming; it was implemented just to accommodate the MDC problem.
It's actually a lot of overhead to pass data from thread to thread.
So what can we do?
Push on logging frameworks, and/or help logging frameworks rewrite their codebases
Accept that there is no "tweak" that will magically fix this
Use the context in the way suggested in the blog posts; a sketch of that approach follows below
Project reactor context
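For reference, here is a minimal sketch of that context-copying approach against the Reactor 3.2 Signal API (the correlationId key and class name are hypothetical):

```java
import java.util.function.Consumer;

import org.slf4j.MDC;
import reactor.core.publisher.Signal;

public final class MdcLogging {

    private MdcLogging() {
    }

    // Wraps a log statement so that, for each onNext signal, a correlation
    // id carried in the Reactor Context is copied into the MDC just for
    // the duration of that statement, then removed again.
    public static <T> Consumer<Signal<T>> logOnNext(Consumer<T> logStatement) {
        return signal -> {
            if (!signal.isOnNext()) {
                return;
            }
            String correlationId = signal.getContext().getOrDefault("correlationId", "n/a");
            try (MDC.MDCCloseable ignored = MDC.putCloseable("correlationId", correlationId)) {
                logStatement.accept(signal.get());
            }
        };
    }
}
```

You would seed the id with the subscriberContext operator (the Reactor 3.2 name; later renamed contextWrite) and log with doOnEach, e.g. flux.doOnEach(MdcLogging.logOnNext(v -> log.info("handled {}", v))).subscriberContext(Context.of("correlationId", id)).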

Data Migration using Spring

We are beginning the process of re-architecting the systems within our company.
One of the key components of the work is a new data model which better meets our requirements.
A major part of the initial phase of the work is to design and build a data migration tool.
This will take data from one or more existing systems and migrate it to the new model.
Some requirements:
Transformation of data to the new model
Enrichment of data, with default values or according to business rules
Integration with existing systems to pull data
Integration with Salesforce CRM which is being introduced into the company.
Logging and notification about failures
Within the Spring world, which is the best Spring project to use as the underlying framework for such a data migration tool?
My initial thoughts are to look at implementing the tool using Spring Integration.
This would:
Through the XML or Java DSL, allow the high-level data flow to be seen, understood, and edited (possibly using a visual tool such as an STS plugin). Being able to view the high-level flow in such a way is a big advantage.
Connectors to work with different data sources.
Transformer components to be built to migrate data formats.
Routers to route the data in the new model to endpoints which connect with systems.
However, are there other Spring projects, such as Spring Data or Spring Batch, which are a better match for the requirements?
Very much appreciate feedback and ideas.
I would certainly start with spring-integration, which exposes bare-bones implementations of the Enterprise Integration Patterns that are at the core of most, if not all, of the requirements you listed.
It is also an exceptionally good problem-modelling tool which helps you better understand the problem and then envision its implementation as one cohesive integration flow.
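To make that concrete, here is a minimal sketch of such a flow in the Spring Integration Java DSL (the legacy_customer table, column names, and output channels are all hypothetical): poll un-migrated legacy rows, enrich them with a default value, and route each record to the system it belongs to.

```java
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;

@Configuration
@EnableIntegration
public class MigrationFlowConfig {

    @Bean
    public IntegrationFlow legacyCustomerFlow(DataSource legacyDataSource) {
        // Poll rows that have not yet been migrated; mark them in the same poll.
        JdbcPollingChannelAdapter source = new JdbcPollingChannelAdapter(
                legacyDataSource, "SELECT * FROM legacy_customer WHERE migrated = 0");
        source.setUpdateSql("UPDATE legacy_customer SET migrated = 1 WHERE id IN (:id)");

        return IntegrationFlows
                .from(source, c -> c.poller(Pollers.fixedDelay(1000)))
                .split() // one message per row
                .<Map<String, Object>, Map<String, Object>>transform(row -> {
                    row.putIfAbsent("region", "UNKNOWN"); // enrichment with a default value
                    return row; // field-by-field mapping to the new model goes here
                })
                .<Map<String, Object>, String>route(row -> (String) row.get("target_system"), r -> r
                        .channelMapping("SALESFORCE", "salesforceChannel")
                        .channelMapping("INTERNAL", "internalChannel"))
                .get();
    }
}
```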
Later on, once you have a clear understanding of how things are working, it would be extremely simple to take it to the next level by introducing the "other frameworks" you mentioned/tagged, such as spring-cloud-data-flow and spring-cloud-stream.
Overall this question is rather broad, so consider following the above pointers to get started, and then raise more concrete questions.

Spring Gemfire Cache implementation

I am trying to implement the caching mechanism provided by Spring Data GemFire. Has anyone implemented such a solution? I need to check on performance and ease of implementation.
Sonal-
First, you can find plenty of examples in the Spring User Guides; for example:
Accessing Data with GemFire,
Caching Data with GemFire, and
Accessing GemFire Data with REST
Additionally, there is a Spring GemFire Examples project here.
I have also started work on building a "Reference Implementation" (RI) for Spring Data GemFire/Geode, here. I have much work to do with this project yet, like documentation (READMEs) in the repo, but I do plan to keep it up to date with my latest developments since I use the code as a basis for all my conference talks. Anyway, there are plenty of code examples and tests in this GitHub project to keep you busy for a while.
Then, the Spring Data GemFire and Spring Data Geode GitHub projects themselves, have plenty of tests to show you how to address different application concerns (Configuration, Data Access, Function Execution, etc, etc).
Of particular interest might be the new Annotation-based configuration model for SDG 2.0 that I am working on. This is currently a WIP and I am also working on User Guide documentation for this feature/functionality, but it is established and even inspired by the auto-configuration features and annotations provided by Spring and Spring Boot (e.g. @EnableXYZ).
Users have even started using the Annotation-based configuration model without significant documentation in place, since it builds on concepts already available and familiar in Spring Boot. In fact, combining these SDG-specific annotations with Spring Boot makes for a very powerful combination while preserving the simple/easy nature of getting started, one of my primary goals.
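As a taste, here is a minimal sketch of that annotation style combined with Spring's cache abstraction (the Quotes region and the service are hypothetical):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.cache.config.EnableGemfireCaching;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableCachingDefinedRegions;
import org.springframework.stereotype.Service;

// Bootstraps a GemFire ClientCache, plugs GemFire in as the Spring
// CacheManager, and creates a client Region for every cache name
// used in a @Cacheable annotation.
@Configuration
@ClientCacheApplication
@EnableGemfireCaching
@EnableCachingDefinedRegions
class GemFireCachingConfig {
}

@Service
class QuoteService {

    // The first call per symbol executes the method body; subsequent
    // calls are served from the "Quotes" GemFire Region.
    @Cacheable("Quotes")
    public String findQuote(String symbol) {
        return "quote-for-" + symbol; // stand-in for an expensive lookup
    }
}
```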
Given the lack of documentation yet, you can find more out in the Spring IO blog, where I first blogged about it here. Then I expanded on this article in a second blog, talking specifically about security.
And if you are really curious, you can follow the latest developments of the Annotation configuration model in my testing efforts.
Finally, of course, as I have already alluded to, and as any good developer knows, getting started is as easy as following the examples and reviewing the Spring Data GemFire Reference Guide and Javadoc.
Don't forget to familiarize yourself with Pivotal GemFire as well! Javadoc here.
Hope this helps!
-John

When is Spring + Tomcat not powerful enough?

I've been reading/learning more about Spring lately, and how one would use Spring in combination with other open-source tools like Tomcat and Hibernate. I'm evaluating whether or not Spring MVC could be a possible replacement technology for the project I work on, which uses WebLogic and a LOT of custom-rolled Java EE code. The thing is, I've always suspected that our solution is over-engineered and WAY more complex than it needs to be. Amazingly, it's 2009, and yet, we're writing our own transaction-handling and thread-pooling classes. And it's not like we're Amazon, eBay, or Google, if you know what I mean. Thus, I'm investigating a "simpler is better" option.
So here's my question: I'd like to hear opinions on how you make the decision that a full-blown Java EE application server is necessary, or not. How do you "measure" the size/load/demand on a Java EE app? Number of concurrent users? Total daily transactions? How "heavy" does an app need to get before you throw up your hands in surrender and say, "OK, Tomcat just isn't cutting it, we need JBoss/WebLogic/WebSphere"?
I don't think that the decision to use a full-fledged Java EE server or not should be based on number of users or transactions. Rather it should be based on whether you need the functionality.
In my current project we're actually moving away from JBoss to vanilla Tomcat because we realized we weren't using any of the Java EE functionality beyond basic servlets anyway. We are, however, using Spring. Between Spring's basic object management, transaction handling and JDBC capabilities, we aren't seeing a compelling need for EJB. We currently use Struts 2 rather than Spring's MVC, but I've heard great things about that. At any rate, Spring integrates well with a number of Java web frameworks.
Spring does not attempt to replace certain advanced parts of the JavaEE spec, such as JMS and JTA. Instead, it builds on those, making them consistent with the "Spring way", and generally making them easier to use.
If your application requires the power of the likes of JMS and JTA, then you can easily use them via Spring. Not a problem with that.
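For example, a minimal sketch of the "Spring way" over raw JMS (the queue name is hypothetical); JmsTemplate runs happily on Tomcat against an external broker:

```java
import org.springframework.jms.core.JmsTemplate;

public class AlertSender {

    private final JmsTemplate jmsTemplate;

    public AlertSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String text) {
        // One call replaces the manual Connection/Session/MessageProducer
        // ceremony of the raw JMS API.
        jmsTemplate.convertAndSend("alerts.queue", text);
    }
}
```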
Google open sources a lot of their code. If you're writing low-level things yourself, instead of implementing code that's already written, you're often overthinking the problem.
Back to the actual question, Walmart.com, etrade.com, The Weather Channel and quite a few others just use Tomcat. Marketing and sales guys from IBM would have you believe different, perhaps, but there's no upper limit on Tomcat.
Except for EJB, I'm not sure what Tomcat is missing, and I'm not a fan of EJB.
What Tomcat does not offer, apart from the more exotic elements of Java EE, is session beans (aka EJBs). Session beans allow you to isolate your processing efficiently. So you could have one box for the front end, another for the session beans (business logic) and another for the database.
You would want to do this for at least 2 reasons:
Performance: you're finding that having one box handle everything loads it too much. Separating the different layers onto different boxes would allow you to scale out. Session beans are also able to load-balance at a more fine-grained level. Tomcat and other web containers of that ilk don't have clustering out of the box.
Flexibility: now that you've moved your business logic into its own environment, you could develop an alternate front end which uses the same layer, say, a thick-client front end. Or maybe other contexts would like to make use of the session beans.
Though I should probably point out that if you use web services to communicate with that middle tier, it could also be on tomcat!
The only reason to use a full-blown Java EE server is if you need distributed XA transactions. If you don't need XA transactions, then you can use Spring + JPA + Tomcat + Bean Validation + JSTL + EL + JSP + Java Mail.
Also, a Java EE server is supposed to implement JMS, but it does not make sense to run the JMS server in the same VM as the rest of the app server, so if you need JMS you should have a separate JMS server.
I strongly disagree with all answers given here.
Everything can be added to Tomcat, including EJB, CDI, JTA, Bean Validation, JAX-RS, etc.
The question is: do you want this? Do you want to assemble all those dependencies in the right versions and test that it all works together, when others have already done this?
Let's be clear: nobody uses only Tomcat! Everyone always adds a web framework, an IoC container, an ORM, a transaction manager, web services, etc.
Lightweight Java EE servers like TomEE already include all of that and make the full-stack experience of having all those things integrated so much better.
Maybe this can be of interest:
http://onjava.com/onjava/2006/02/08/j2ee-without-application-server.html
HTH
