Activiti vs Spring Batch

I have got a use case to implement. It's basically a workflow. Below are the requirements:
Extract and import data from an external DB to an internal DB
Transform the imported data into different formats and supply it to multiple external systems, invoking scripts there. The external interfaces are SFTP, SOAP, JDBC, and Python over CORBA. There are around 14 external systems, each using one of these interfaces.
Interface transactions are executed in around 15 steps, with the ability to run some steps in parallel
These steps should be configurable, i.e., one flow may execute 10 of the 15 steps while another executes all 15
Should have the ability to restart each step individually or restart from a particular step
Some steps are manual, and completion of a manual step should trigger the next step
The volume of data is not that large: around 400k records in total, processed about 30k records at a time. Development time is short, and we are looking for a lightweight, easy-to-learn-and-implement solution.
We are looking for Spring-based solutions, or solutions that integrate well with Spring.
The solutions we considered are
For workflow:
Activiti, Spring Batch
For interfaces:
Spring Integration
My questions are:
Can Spring Batch be considered for managing a workflow use case like this? I don't think it's the best-fit use case for Spring Batch, but since it's simple and easy to implement I looked into its scope. We considered implementing each interface interaction as a step in a batch job, using Spring Integration inside the tasklet for the external interfaces. The issues, as far as I understand, are:
a) Dynamic step configuration can be done with Java configuration (see the flow sketch below), but how flexible is it, and is it recommended?
b) Manual step processing is not possible in Spring Batch.
Is there any workaround for this? Are there any other issues or performance impacts with this approach?
Activiti seems to be a solution. Can you please provide some feedback on Activiti with Spring and Spring Integration for this use case, and on the ease of implementing it? And how good is the support for Activiti?
Can Activiti workflows be restarted from a particular task? Can a task be rolled back?
Welcoming any suggestions!
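To make question a) concrete, the kind of flow configuration I have in mind would look roughly like this sketch. The step beans and the decider are just placeholders, not real code from our project:

```java
// Illustrative only -- lives in a @Configuration class; step beans and the
// JobExecutionDecider are hypothetical placeholders.
@Bean
public Job interfaceJob(JobBuilderFactory jobs,
                        Step extractStep, Step transformStep,
                        Step optionalStep, Step finalStep,
                        JobExecutionDecider flowDecider) {
    return jobs.get("interfaceJob")
            .start(extractStep)
            .next(transformStep)
            .next(flowDecider)                        // decides which variant of the flow to run
            .on("FULL_FLOW").to(optionalStep).next(finalStep)
            .from(flowDecider).on("*").to(finalStep)  // shorter variant skips optionalStep
            .end()
            .build();
}
```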

1) For managing workflows, Activiti would be a great choice. It provides a really good process engine that should meet your needs for delegating tasks as well as calling your custom logic. Moreover, it has first-class Spring support, so integrating it with your logic would be easy.
2) I've covered this in the answer to your first question.
3) No, you will have to create a new workflow instance for that. And yes, a task can be rolled back.
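To make "calling your custom logic" concrete: with Activiti's Spring integration, a BPMN service task can point at a Spring bean via a delegate expression. A minimal sketch (the bean name and the import logic are invented for illustration):

```java
// Referenced from the BPMN file as:
//   <serviceTask id="importData" activiti:delegateExpression="${importDataDelegate}"/>
@Component("importDataDelegate")
public class ImportDataDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Process variables carry state between tasks in the workflow.
        String sourceDb = (String) execution.getVariable("sourceDb");
        // ... run the extract/import logic for sourceDb here ...
    }
}
```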

Related

Spring Batch disable Spring Boot AutoConfiguration for specific jobs

I have multiple jobs in my Spring Batch application, but only one of them uses specific Spring Boot auto-configuration features:
a job that uses spring-data-jpa auto-configuration, to configure a database for business transactions (not for Spring Batch management)
a job that does not use the database at all
I have packaged both jobs in the same unit because it makes sense from a business perspective: the jobs work together, and the output of one job is the input of the other.
Is it possible to disable the database-specific auto-configuration when I run the second job?
I just tried using profiles and disabled the auto-configuration for a specific profile. I am pretty happy with this solution, but I wonder if there are other solutions?
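For reference, a minimal sketch of that idea (the application class name is made up; the same spring.autoconfigure.exclude property can also live in a profile-specific properties file, which is what the profile approach above amounts to):

```java
// Sketch: launch the no-database job with JPA auto-configuration switched off.
@SpringBootApplication
public class NoDatabaseJobApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(NoDatabaseJobApplication.class)
                .properties("spring.autoconfigure.exclude="
                        + "org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration,"
                        + "org.springframework.boot.autoconfigure.data.jpa.JpaRepositoriesAutoConfiguration")
                .run(args);
    }
}
```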
This is similar to trying to lazily load beans specific to a given job: How to apply something like @Lazy to Spring Batch?. While the Spring profiles feature may fix your issue, I believe it works around the root issue, which is packaging all jobs in a monolithic way.
I would package each job separately, and this problem (and many others) disappears by design. There are several advantages to this approach:
Independent lifecycle management (bugs, features, etc.)
Flexible deployment
Separate logs
Separate configurations (as in the current issue)
Easier/Better scalability
And all the good reasons to make one thing do one thing and do it well.

ETL in Java Spring Batch vs Apache Spark Benchmarking

I have been working with Apache Spark + Scala for over 5 years now (academic and professional experience). I have always found Spark/Scala to be one of the most robust combos for building any kind of batch or streaming ETL/ELT application.
But lately, my client decided to use Java Spring Batch for 2 of our major pipelines:
Read from MongoDB --> Business Logic --> Write to JSON File (~ 2GB | 600k Rows)
Read from Cassandra --> Business Logic --> Write JSON File (~ 4GB | 2M Rows)
I was pretty baffled by this enterprise-level decision. I agree there are greater minds than mine in the industry, but I was unable to comprehend the need for this move.
My Questions here are:
Has anybody compared the performances between Apache Spark and Java Spring Batch?
What could be the advantages of using Spring Batch over Spark?
Is Spring Batch "truly distributed" when compared to Apache Spark? I came across methods like chunk(), partition etc in offcial docs but I was not convinced of its true distributedness. After all Spring Batch is running on a single JVM instance. Isn't it ???
I'm unable to wrap my head around these. So, I want to use this platform for an open discussion between Spring Batch and Apache Spark.
As the lead of the Spring Batch project, I’m sure you’ll understand I have a specific perspective. However, before beginning, I should call out that the frameworks we are talking about were designed for two very different use cases. Spring Batch was designed to handle traditional enterprise batch processing on the JVM: to take the well-understood patterns that are commonplace in enterprise batch processing and make them convenient in a framework for the JVM. Spark, on the other hand, was designed for big data and machine learning use cases. Those use cases have different patterns, challenges, and goals than a traditional enterprise batch system, and that is reflected in the design of the framework. That being said, here are my answers to your specific questions.
Has anybody compared the performances between Apache Spark and Java Spring Batch?
No one can really answer this question for you. Performance benchmarks are a very specific thing. Use cases matter. Hardware matters. I encourage you to do your own benchmarks and performance profiling to determine what works best for your use cases in your deployment topologies.
What could be the advantages of using Spring Batch over Spark?
Programming model similar to other enterprise workloads
Enterprises need to be aware of the resources they have on hand when making architectural decisions. Is using new technology X worth the retraining or hiring overhead compared with technology Y? In the case of Spark vs Spring Batch, the ramp-up for an existing Spring developer on Spring Batch is very minimal. I can take any developer who is comfortable with Spring and make them fully productive with Spring Batch very quickly. Spark has a steeper learning curve for the average enterprise developer, not only because of the overhead of learning the Spark framework itself but also because of all the related technologies needed to productionize a Spark job in that ecosystem (HDFS, Oozie, etc.).
No dedicated infrastructure required
When running Spark in a distributed environment, you need to configure a cluster using YARN, Mesos, or Spark’s own clustering installation (there is an experimental Kubernetes option available at the time of this writing, but, as noted, it is labeled as experimental). This requires dedicated infrastructure for specific use cases. Spring Batch can be deployed on any infrastructure. You can execute it via Spring Boot with executable JAR files, you can deploy it into servlet containers or application servers, and you can run Spring Batch jobs via YARN or any cloud provider. Moreover, if you use Spring Boot’s executable JAR concept, there is nothing to set up in advance, even when running a distributed application on the same cloud-based infrastructure you run your other workloads on.
More out of the box readers/writers simplify job creation
The Spark ecosystem is focused around big data use cases. Because of that, the components it provides out of the box for reading and writing are focused on those use cases. Things like different serialization options for reading files commonly used in big data are handled natively. However, processing things like chunks of records within a transaction is not.
Spring Batch, on the other hand, provides a complete suite of components for declarative input and output: reading and writing flat files, XML files, databases, NoSQL stores, messaging queues, writing emails...the list goes on. Spring Batch provides all of those out of the box.
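To give a sense of what that declarative I/O looks like, here is a minimal sketch (the Person type, file name, and SQL are invented for illustration) of a chunk-oriented step that reads a CSV file and writes to a database:

```java
// Inside a @Configuration class; Person is a simple POJO with firstName/lastName.
@Bean
public Step csvToDatabaseStep(StepBuilderFactory steps, DataSource dataSource) {
    FlatFileItemReader<Person> reader = new FlatFileItemReaderBuilder<Person>()
            .name("personReader")
            .resource(new FileSystemResource("people.csv"))
            .delimited()
            .names("firstName", "lastName")
            .targetType(Person.class)
            .build();

    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriterBuilder<Person>()
            .dataSource(dataSource)
            .sql("INSERT INTO person (first_name, last_name) VALUES (:firstName, :lastName)")
            .beanMapped()
            .build();

    return steps.get("csvToDatabaseStep")
            .<Person, Person>chunk(100)   // one transaction per 100 items
            .reader(reader)
            .writer(writer)
            .build();
}
```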
Spark was built for big data...not all use cases are big data use cases
In short, Spark’s features are specific to the domain it was built for: big data and machine learning. Things like transaction management (or transactions at all) do not exist in Spark. The idea of rolling back when an error occurs doesn’t exist (to my knowledge) without custom code. More robust error-handling features like skip/retry are not provided at the level of the framework. State management for things like restarting is much heavier in Spark than in Spring Batch (persisting the entire RDD vs storing trivial state for specific components). All of these features are native features of Spring Batch.
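For illustration, the skip/retry support mentioned above is a one-line-per-policy addition to a step definition (the exception choices here are just examples):

```java
// Inside a @Configuration class; reader/writer beans as in any chunk step.
@Bean
public Step faultTolerantStep(StepBuilderFactory steps,
                              ItemReader<Person> reader,
                              ItemWriter<Person> writer) {
    return steps.get("faultTolerantStep")
            .<Person, Person>chunk(100)
            .reader(reader)
            .writer(writer)
            .faultTolerant()
            .retry(DeadlockLoserDataAccessException.class).retryLimit(3) // transient DB errors
            .skip(FlatFileParseException.class).skipLimit(10)            // tolerate bad input lines
            .build();
}
```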
Is Spring Batch “truly distributed”
One of the advantages of Spring Batch is the ability to evolve a batch process from a simple sequentially executed, single JVM process to a fully distributed, clustered solution with minimal changes. Spring Batch supports two main distributed modes:
Remote Partitioning - Here Spring Batch runs in a master/worker configuration. The master delegates work to workers based on the orchestration mechanism (there are many options here). Full restartability, error handling, etc. are all available with this approach, with minimal network overhead (only the metadata describing each partition is transmitted) to the remote JVMs. Spring Cloud Task also provides extensions to Spring Batch that allow cloud-native mechanisms to dynamically deploy the workers.
Remote Chunking - Remote chunking delegates only the processing and writing phases of a step to a remote JVM. Still using a master/worker configuration, the master is responsible for providing the data to the workers for processing and writing. In this topology, the data travels over the wire, causing a heavier network load. It is typically used only when the processing advantages can surpass the overhead of the added network traffic.
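To give a flavor of the partitioning model, here is a sketch of the local flavor (the Partitioner below just slices work into numbered ranges; remote partitioning swaps the TaskExecutor for messaging middleware):

```java
// Inside a @Configuration class. workerStep is an ordinary step that reads the
// slice identified by "partitionIndex" from its step execution context.
@Bean
public Step managerStep(StepBuilderFactory steps, Step workerStep) {
    return steps.get("managerStep")
            .partitioner("workerStep", rangePartitioner())
            .step(workerStep)
            .gridSize(4)                                 // number of partitions
            .taskExecutor(new SimpleAsyncTaskExecutor()) // local threads; remote variants use middleware
            .build();
}

@Bean
public Partitioner rangePartitioner() {
    return gridSize -> {
        Map<String, ExecutionContext> partitions = new HashMap<>();
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext context = new ExecutionContext();
            context.putInt("partitionIndex", i); // each worker picks up its own slice
            partitions.put("partition" + i, context);
        }
        return partitions;
    };
}
```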
There are other Stack Overflow answers that discuss these features in further detail (as does the documentation):
Advantages of spring batch
Difference between spring batch remote chunking and remote partitioning
Spring Batch Documentation

What modules to use for a synchronization service in Java/Spring?

I want to build a synchronization service in Java. The use case is that I'm fetching data from an exchange service (via Exchange Web Services), normalizing the data a bit (processing it, probably), and then writing it to a backend via GraphQL. I've already had a look around the Spring modules but am not quite sure which ones to use. I found Spring Batch and Quartz.
The synchronization will have to trigger every X seconds, fetch information from the Exchange, check what's already in the backend, and update what's needed.
Do you have any suggestions? I started implementing this whole thing in Node.js before, but since it has to run on both Windows Servers and Docker/Linux, it has been a real pain to keep it running smoothly (mostly because bundling Node.js into an application for Windows is painful).
Difference between Spring Batch & Quartz:
Spring Batch and Quartz have different goals. Spring Batch provides functionality for processing large volumes of data and Quartz provides functionality for scheduling tasks.
So Quartz can complement Spring Batch. A common combination is to use Quartz as a trigger for a Spring Batch job, using a cron expression.
Conclusion: Spring Batch defines what should be done, Quartz defines when it should be done.
Quartz is a scheduling framework, as in "execute something every hour or every last Friday of the month".
Spring Batch is a framework that defines the "something" that will be executed.
You can define a job that consists of steps. Usually a step consists of an item reader, an optional item processor, and an item writer, but you can also define a custom step. You can also tell Spring Batch to commit every 10 items, and a lot of other stuff.
You can use Quartz to start Spring Batch jobs.
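A minimal sketch of that combination (job and bean names are invented; this assumes the Quartz scheduler is set up with a Spring-aware job factory, e.g. via the Spring Boot Quartz starter, so autowiring works):

```java
// Scheduled by a Quartz CronTrigger; launches the Spring Batch job each time it fires.
public class LaunchBatchJob extends QuartzJobBean {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job importJob; // the Spring Batch job to run

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        try {
            JobParameters params = new JobParametersBuilder()
                    .addLong("run.id", System.currentTimeMillis()) // unique params => new job instance
                    .toJobParameters();
            jobLauncher.run(importJob, params);
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}
```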
Recommended for your use case: Quartz scheduling, since you want to trigger after a specific interval.
Reference: https://projects.spring.io/spring-batch/faq.html

Spring batch or Spring boot async method execution?

I have a situation where data is to be read from 4 different web services, processed, and the results then stored in a database table, with a notification sent after the task is complete. The trigger for this process is a web service call.
Should I write my job as a Spring Batch job, or write the whole read/process code as an async method (using @Async) called from the REST controller?
Kindly suggest.
In my opinion your choice should be @Async, because Spring Batch was designed for large data processing and isn't meant for on-demand processing; typically you create your batch and then launch it on a schedule. The benefit of that kind of architecture is the reliability of the job, which can be restarted in case of failure, and so on. In your case you have a data integration problem, and I suggest looking at Spring Integration. You could have a Spring Integration pipeline that you start through a REST call.
I hope this helps.
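A rough sketch of the @Async shape (class and endpoint names are placeholders): the controller returns immediately while the aggregation runs on a background thread.

```java
@EnableAsync
@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Service
class AggregationService {

    @Async // runs on a task-executor thread, not the request thread
    public void readProcessAndStore() {
        // 1) read from the four web services
        // 2) process and store the results in the database
        // 3) send the notification
    }
}

@RestController
class TriggerController {

    private final AggregationService service;

    TriggerController(AggregationService service) {
        this.service = service;
    }

    @PostMapping("/trigger")
    public ResponseEntity<Void> trigger() {
        service.readProcessAndStore();        // returns immediately
        return ResponseEntity.accepted().build();
    }
}
```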
If a large number of services must be executed, Spring Batch would be the choice. Otherwise, I guess there is no need to bring in Spring Batch.
In my opinion, the @Async annotation is the easier way.
If both methods work, then of course the simpler the better.
In the end, if there will be more and more services, not just 4, Spring Batch would be the better solution, because Spring Batch is purpose-built for this.

Spring Batch for File Processing

Is Spring Batch a good fit for processing a large number of individual files?
Spring Batch seems to be geared towards data-centric jobs. I've got a requirement to pull down several million files from an S3 bucket, unzip them, perform some logic based on the contents, then call a web service.
Implementing this by hand is trivial, but I don't much fancy reinventing the wheel when it comes to tracking job executions and how far a job got before it failed. Spring Batch seems an ideal fit for this job monitoring, but I'm not sure whether subverting it to do file processing is a step too far.
The short answer is yes, you can use Spring Batch for this. I did a small POC where we had to migrate millions of images from a source system to a target system in a batch process, and it worked well IMHO.
Adding on to the comment by @Prasanna Talakanti, I would suggest using a combination of Spring Integration and Spring Batch. While Spring Batch provides the infrastructure for batch processing (commit at intervals, restart failed jobs, etc.), Spring Integration provides the pieces around web service gateways.
In Spring Batch, you can define a reader that reads from S3 and a writer that writes to your destination, with a processor in between if needed. You can also fine-tune the commit interval, so if the job fails partway through, you have a point of rollback.
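A sketch of that shape (the S3 key reader, the unzip/transform processor, and the web-service writer are hypothetical beans; only the Spring Batch wiring is the point here):

```java
// Inside a @Configuration class. Each item is one S3 object key; restart resumes
// from the last committed chunk, which is the job-monitoring behavior asked about.
@Bean
public Step fileProcessingStep(StepBuilderFactory steps,
                               ItemReader<String> s3KeyReader,
                               ItemProcessor<String, Payload> unzipAndTransform,
                               ItemWriter<Payload> webServiceWriter) {
    return steps.get("fileProcessingStep")
            .<String, Payload>chunk(50)           // commit interval: the rollback/restart point
            .reader(s3KeyReader)
            .processor(unzipAndTransform)
            .writer(webServiceWriter)
            .faultTolerant()
            .skip(Exception.class).skipLimit(100) // tolerate a bounded number of bad files
            .build();
}
```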
