Spring Batch and Apache Kafka

I am in the learning phase of Kafka. I came across this video, and it confuses me a lot.
I am able to understand Kafka consumers and producers, and I can find a lot of reference material about them. We already have batch listeners, so why do we need Spring Batch support here? Is there any specific advantage of using the Spring Batch Kafka support over normal batch listeners? Please help me understand, as I can't find any reference material comparing the two. My feeling is that we have more freedom and customization using a normal consumer and producer. Please correct me if I am wrong.

Spring Batch is a batch processing framework (fixed data sets) while Kafka is a streaming platform (infinite data streams). Those tools address two different types of requirements and use cases.
However, there are many cases where you want a "bridge" between these two worlds. Here are a couple of examples:
Replay a stream of events to rebuild application state up to a certain point in time: here you can use a Spring Batch job that reads a Kafka topic from the beginning and replays all events (the KafkaItemReader can be helpful here; see the sketch after this list).
Inject a set of events from a file or a database table into a live stream. The KafkaItemWriter can be used in this case.
etc.
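For illustration, here is a minimal sketch of configuring a KafkaItemReader to replay a topic from the beginning. The broker address, topic name, and group id are placeholders; passing an empty offsets map (supported since Spring Batch 4.3) tells the reader to start from offset 0 in each partition:

```java
import java.util.HashMap;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.batch.item.kafka.KafkaItemReader;
import org.springframework.batch.item.kafka.builder.KafkaItemReaderBuilder;

public class KafkaReplayReaderConfig {

    // Reads the "events" topic from offset 0 so a batch job can replay history.
    public KafkaItemReader<String, String> kafkaItemReader() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-job");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        return new KafkaItemReaderBuilder<String, String>()
                .name("eventReader")               // key for restart state in the execution context
                .consumerProperties(props)
                .topic("events")                   // hypothetical topic
                .partitions(0)                     // partitions to read
                .partitionOffsets(new HashMap<>()) // empty map = start from the beginning
                .build();
    }
}
```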
The advantage of using a Spring Batch job over a regular batch listener is everything Spring Batch offers in terms of transaction management, state management for restartability, fault-tolerance features, and so on.

Related

ETL in Java Spring Batch vs Apache Spark Benchmarking

I have been working with Apache Spark + Scala for over 5 years now (academic and professional experience). I have always found Spark/Scala to be one of the most robust combos for building any kind of batch or streaming ETL/ELT application.
But lately, my client decided to use Java Spring Batch for two of our major pipelines:
Read from MongoDB --> Business Logic --> Write to JSON File (~ 2GB | 600k Rows)
Read from Cassandra --> Business Logic --> Write JSON File (~ 4GB | 2M Rows)
I was pretty baffled by this enterprise-level decision. I agree there are greater minds than mine in the industry, but I was unable to comprehend the need for this move.
My Questions here are:
Has anybody compared the performances between Apache Spark and Java Spring Batch?
What could be the advantages of using Spring Batch over Spark?
Is Spring Batch "truly distributed" when compared to Apache Spark? I came across methods like chunk() and partitioning in the official docs, but I was not convinced that it is truly distributed. After all, Spring Batch runs on a single JVM instance. Doesn't it?
I'm unable to wrap my head around these. So, I want to use this platform for an open discussion between Spring Batch and Apache Spark.
As the lead of the Spring Batch project, I’m sure you’ll understand I have a specific perspective. However, before beginning, I should call out that the frameworks we are talking about were designed for two very different use cases. Spring Batch was designed to handle traditional, enterprise batch processing on the JVM. It was designed to apply well-understood patterns that are commonplace in enterprise batch processing and make them convenient in a framework for the JVM. Spark, on the other hand, was designed for big data and machine learning use cases. Those use cases have different patterns, challenges, and goals than a traditional enterprise batch system, and that is reflected in the design of the framework. That being said, here are my answers to your specific questions.
Has anybody compared the performances between Apache Spark and Java Spring Batch?
No one can really answer this question for you. Performance benchmarks are a very specific thing. Use cases matter. Hardware matters. I encourage you to do your own benchmarks and performance profiling to determine what works best for your use cases in your deployment topologies.
What could be the advantages of using Spring Batch over Spark?
Programming model similar to other enterprise workloads
Enterprises need to be aware of the resources they have on hand when making architectural decisions. Is using new technology X worth the retraining or hiring overhead over technology Y? In the case of Spark vs Spring Batch, the ramp-up for an existing Spring developer on Spring Batch is very minimal. I can take any developer who is comfortable with Spring and make them fully productive with Spring Batch very quickly. Spark has a steeper learning curve for the average enterprise developer, not only because of the overhead of learning the Spark framework itself but also all the related technologies needed to productionize a Spark job in that ecosystem (HDFS, Oozie, etc.).
No dedicated infrastructure required
When running in a distributed environment, you need to configure a cluster using YARN, Mesos, or Spark’s own clustering installation (there is an experimental Kubernetes option available at the time of this writing, but, as noted, it is labeled as experimental). This requires dedicated infrastructure for specific use cases. Spring Batch can be deployed on any infrastructure. You can execute it via Spring Boot with executable JAR files, you can deploy it into servlet containers or application servers, and you can run Spring Batch jobs via YARN or any cloud provider. Moreover, if you use Spring Boot’s executable JAR concept, there is nothing to set up in advance, even when running a distributed application on the same cloud-based infrastructure you run your other workloads on.
More out of the box readers/writers simplify job creation
The Spark ecosystem is focused around big data use cases. Because of that, the components it provides out of the box for reading and writing are focused on those use cases. Things like different serialization options for reading files commonly used in big data are handled natively. However, processing chunks of records within a transaction is not.
Spring Batch, on the other hand, provides a complete suite of components for declarative input and output: reading and writing flat files and XML files, reading from databases, NoSQL stores, and messaging queues, writing emails... the list goes on. Spring Batch provides all of those out of the box.
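As a small example of that declarative style, here is a sketch of a delimited-file reader (Spring Batch 5 builder API; the file name and field names are made up):

```java
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.core.io.FileSystemResource;

public class PersonReaderConfig {

    public record Person(String firstName, String lastName) {}

    // Column-to-field mapping is declared; no hand-written parsing code.
    public FlatFileItemReader<Person> personReader() {
        return new FlatFileItemReaderBuilder<Person>()
                .name("personReader")                            // key for restart state
                .resource(new FileSystemResource("people.csv"))  // hypothetical input file
                .delimited()
                .names("firstName", "lastName")
                .targetType(Person.class)                        // maps columns onto the record
                .build();
    }
}
```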
Spark was built for big data...not all use cases are big data use cases
In short, Spark’s features are specific to the domain it was built for: big data and machine learning. Things like transaction management (or transactions at all) do not exist in Spark. The idea of rolling back when an error occurs doesn’t exist (to my knowledge) without custom code. More robust error-handling use cases like skip/retry are not provided at the level of the framework. State management for things like restarting is much heavier in Spark than in Spring Batch (persisting the entire RDD vs storing trivial state for specific components). All of these features are native features of Spring Batch.
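To make that concrete, here is a minimal sketch (Spring Batch 5 API) of a chunk-oriented step with declarative skip and retry; the exception types and limits are illustrative only:

```java
import org.springframework.batch.core.Step;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.transaction.PlatformTransactionManager;

public class FaultTolerantStepConfig {

    // Each chunk of 10 items is one transaction that rolls back on failure;
    // skip and retry behavior is declared rather than hand-coded.
    public Step step(JobRepository jobRepository, PlatformTransactionManager txManager,
                     ItemReader<String> reader, ItemWriter<String> writer) {
        return new StepBuilder("step", jobRepository)
                .<String, String>chunk(10, txManager)
                .reader(reader)
                .writer(writer)
                .faultTolerant()
                .skip(IllegalArgumentException.class).skipLimit(5)            // tolerate a few bad records
                .retry(DeadlockLoserDataAccessException.class).retryLimit(3)  // retry transient failures
                .build();
    }
}
```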
Is Spring Batch “truly distributed”
One of the advantages of Spring Batch is the ability to evolve a batch process from a simple sequentially executed, single JVM process to a fully distributed, clustered solution with minimal changes. Spring Batch supports two main distributed modes:
Remote Partitioning - Here Spring Batch runs in a master/worker configuration. The master delegates work to workers based on the orchestration mechanism (there are many options here). Full restartability, error handling, etc. are all available for this approach, with minimal network overhead (only the metadata describing each partition is transmitted) to the remote JVMs; a minimal Partitioner sketch follows this list. Spring Cloud Task also provides extensions to Spring Batch that allow for cloud-native mechanisms for dynamically deploying the workers.
Remote Chunking - Remote chunking delegates only the processing and writing phases of a step to a remote JVM. Still using a master/worker configuration, the master is responsible for providing the data to the workers for processing and writing. In this topology, the data travels over the wire, causing a heavier network load. It is typically used only when the processing advantages outweigh the overhead of the added network traffic.
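As a rough illustration of why partitioning is network-cheap, here is a hypothetical Partitioner that splits an ID range; only these small ExecutionContext maps are sent to the workers, never the data itself:

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

// Splits a (hypothetical) 1..1,000,000 ID range into gridSize partitions.
// Only these small metadata maps travel to the worker JVMs, never the data.
public class RangePartitioner implements Partitioner {

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> partitions = new HashMap<>();
        int range = 1_000_000 / gridSize;
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext context = new ExecutionContext();
            context.putInt("minId", i * range + 1);   // each worker reads its own slice
            context.putInt("maxId", (i + 1) * range);
            partitions.put("partition" + i, context);
        }
        return partitions;
    }
}
```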
There are other Stack Overflow answers that discuss these features in further detail (as does the documentation):
Advantages of spring batch
Difference between spring batch remote chunking and remote partitioning
Spring Batch Documentation

Spring batch or Spring core libraries for building file operation process

I'm dipping my toes into microservices. Is Spring Boot with Spring Batch applicable to the following requirements?
One or multiple files are read from a specific directory on Linux.
Several operations are performed: applying regexes, building new files, writing them, and FTPing them to a location.
Send an email when the process fails.
Using Spring Boot is confirmed; now the question is:
Should I use Spring Batch or just the core Spring Framework?
I need to integrate with Control-M to trigger the job. Can Control-M be completely removed by using the Spring Batch library? We don't know when to expect the files in the directory.
I've not seen a POC with these requirements. Would someone provide an example POC, or an affirmation that this could be achieved with Spring Batch?
I would use Spring Batch for that use case. Not only does it provide out-of-the-box components for reading, processing, and writing files, it adds a lot more for error handling, scalability, etc. All of those are things you'd probably end up wiring up yourself if you go without Spring Batch.
As for being launched via Control-M, yes, MANY large customers use Control-M to launch their jobs. Unfortunately, I've never done it myself, so I cannot provide any details on the mechanics, but if Control-M can either launch a script or call a REST API, you can launch a job with it (a minimal REST sketch follows).
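If Control-M goes the REST route, a sketch could look like the following (Spring Batch plus Spring MVC; the endpoint path and parameter name are made up):

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JobLaunchController {

    private final JobLauncher jobLauncher;
    private final Job job;

    public JobLaunchController(JobLauncher jobLauncher, Job job) {
        this.jobLauncher = jobLauncher;
        this.job = job;
    }

    // Control-M (or any external scheduler) POSTs here to start the job.
    @PostMapping("/jobs/run")
    public String run() throws Exception {
        return jobLauncher.run(job, new JobParametersBuilder()
                        .addLong("run.ts", System.currentTimeMillis()) // unique parameters per run
                        .toJobParameters())
                .getStatus().toString();
    }
}
```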
I would suggest you go for Spring Batch, as it has a lot of built-in functionality for reading files and writing them to your required location. You will also be able to handle record-skipping requirements. Your mail-triggering requirement can be handled by Control-M: decide on an exit code for your handled exception, and on the basis of that exit code trigger the mail to the respective members. There are many other features that will be helpful if you go with Spring Batch.

What modules to use for a synchronization service in Java/Spring?

I want to build a synchronization service in Java. The use case is that I fetch data from an Exchange service (via Exchange Web Services), normalize the data a bit (probably a processing step), and then write it to a backend via GraphQL. I already had a look around the Spring modules but am not quite sure which to use. I found Spring Batch and Spring Quartz.
The synchronization will have to trigger every X seconds, fetch information from Exchange, look at what's already in the backend, and update what's needed.
Do you guys have any suggestions? I started implementing this whole thing in Node.js before, but as it has to run on both Windows Servers and Docker/Linux, it has been a real pain to keep it running smoothly (mostly because bundling Node.js into an application for Windows is a pain).
Difference between Spring Batch & Quartz:
Spring Batch and Quartz have different goals. Spring Batch provides functionality for processing large volumes of data and Quartz provides functionality for scheduling tasks.
So Quartz can complement Spring Batch; a common combination would be to use Quartz as a trigger for a Spring Batch job, using a cron expression.
Conclusion: Spring Batch defines what should be done, Quartz defines when it should be done.
Quartz is a scheduling framework, as in "execute something every hour or every last Friday of the month".
Spring Batch is a framework that defines that "something" that will be executed.
You can define a job that consists of steps. Usually a step is something that consists of an item reader, an optional item processor, and an item writer, but you can define a custom step. You can also tell Spring Batch to commit every 10 items, and a lot of other things.
You can use Quartz to start Spring Batch jobs.
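A minimal sketch of that combination, assuming Spring's SchedulerFactoryBean is set up to inject the jobLauncher and syncJob properties via the Quartz JobDataMap (the names here are hypothetical):

```java
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.quartz.QuartzJobBean;

// Quartz decides *when* to fire; the Spring Batch job defines *what* runs.
public class LaunchBatchJob extends QuartzJobBean {

    private JobLauncher jobLauncher; // injected from the Quartz JobDataMap
    private Job syncJob;             // the Spring Batch job to launch

    public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; }
    public void setSyncJob(Job syncJob) { this.syncJob = syncJob; }

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        try {
            // A fresh timestamp parameter makes every trigger a new job instance.
            jobLauncher.run(syncJob, new JobParametersBuilder()
                    .addLong("run.ts", System.currentTimeMillis())
                    .toJobParameters());
        } catch (Exception e) {
            throw new JobExecutionException(e);
        }
    }
}
```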
Recommended for your use case:
Quartz scheduling, as you want to trigger after a specific interval.
Reference: https://projects.spring.io/spring-batch/faq.html

How to read and process multiple files concurrently in spring?

I am new to the Spring Framework and I am doing a simple project using Spring, and I got stuck.
In my project I am reading files from a directory using the Spring poller, then processing each file through various channels and sending it to a queue. The problem is that the "file-inbound-channel-adapter" (which I'm using) reads only one file at a time.
So I need a solution that will read and process multiple files at a time.
Is there any way to implement multithreading in Spring Integration?
Thank you.
Add a task-executor to the poller; see the documentation.
You can control the concurrency with max-messages-per-poll and the task executor's pool size; see the complete poller configuration details for more information. A minimal Java DSL sketch follows.
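As an illustration, here is a sketch using the Spring Integration 6 Java DSL (in older versions the entry point is IntegrationFlows.from); the directory, delay, and executor choice are placeholders:

```java
import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.dsl.Files;

@Configuration
public class ConcurrentFilePollingConfig {

    // The task executor runs each polling task off the scheduler thread,
    // and max-messages-per-poll caps how many files each poll picks up.
    @Bean
    public IntegrationFlow fileFlow() {
        return IntegrationFlow
                .from(Files.inboundAdapter(new File("/tmp/in")), // hypothetical directory
                      e -> e.poller(Pollers.fixedDelay(1000)
                              .maxMessagesPerPoll(4)
                              .taskExecutor(new SimpleAsyncTaskExecutor())))
                .handle(message -> System.out.println("Processing " + message.getPayload()))
                .get();
    }
}
```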

Spring Batch for File Processing

Is Spring Batch a good fit for processing a large number of individual files?
Spring Batch seems to be geared towards data-centric jobs. I've got a requirement to pull down several million files from an S3 bucket, unzip them, perform some logic based on the contents, then call a web service.
Implementing this by hand is trivial, but I don't much fancy re-inventing the wheel when it comes to tracking job executions, and how far a job got along before it failed. Spring Batch seems to be an ideal fit for this job-monitoring, but I'm not sure whether subverting it to do file processing is a step too far.
The short answer is yes, you can use Spring Batch for this. I did a small POC where we had to migrate millions of images from a source system to a target system in a batch process, and it worked well, IMHO.
Adding on to the comment by @Prasanna Talakanti, I would suggest using a combination of Spring Integration and Spring Batch. While Spring Batch will provide the infrastructure for batch processing (commit at intervals, restart a job if it failed, etc.), Spring Integration will provide the pieces around web service gateways.
In Spring Batch, you can define a reader for reading data from S3 and a writer for writing to your destination, with a processor in between if needed. You can also fine-tune the commit interval, so if the job fails midway, you have a point of rollback.
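A rough sketch of the reading side, assuming the AWS SDK v2. This simple reader streams object keys and does not save state (it is not restartable), so treat it as a starting point only:

```java
import java.util.Iterator;

import org.springframework.batch.item.ItemReader;

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.S3Object;

// Streams object keys from a bucket; Spring Batch calls read() once per item
// and commits progress at each chunk boundary.
public class S3KeyReader implements ItemReader<String> {

    private final Iterator<S3Object> objects;

    public S3KeyReader(S3Client s3, String bucket) {
        this.objects = s3.listObjectsV2Paginator(
                        ListObjectsV2Request.builder().bucket(bucket).build())
                .contents().iterator(); // lazily pages through the bucket
    }

    @Override
    public String read() {
        // Returning null signals the end of input to Spring Batch.
        return objects.hasNext() ? objects.next().key() : null;
    }
}
```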
