Blocking consumers in Apache Camel using existing components - JDBC

I would like to use the Apache Camel JDBC component to read an Oracle table. I want Camel to run in a distributed environment to meet availability concerns. However, the table I am reading is similar to a queue, so I only want to have a single reader at any given time so I can avoid locking issues (messy in Oracle).
If the reader goes down, I want another reader to take over.
How would you accomplish this using the out-of-the-box Camel components? Is it possible?

It depends on your deployment architecture. For example, if you deploy your Camel apps on ServiceMix (or ActiveMQ) in a master/slave configuration (for HA), then only one consumer will be active at a given time...
But if you need multiple instances running (clustered for scalability), then by default they will compete for and duplicate reads from the table unless you write your own locking logic.
This is easy using Hazelcast distributed locking. There is a camel-hazelcast component, but it doesn't support the lock API. Once you configure your apps to participate in a Hazelcast cluster, just use the lock API around any code that you need to synchronize for a given object...
import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;

Lock lock = Hazelcast.getLock(myLockedObject);
lock.lock();
try {
    // do something here
} finally {
    lock.unlock();
}
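For illustration only, here is a minimal sketch of how that lock could be applied inside a Camel processor, so that only one instance in the cluster handles a polled row at a time. The class name and lock key are hypothetical, and the same pre-3.x static Hazelcast.getLock API as in the snippet above is assumed; newer Hazelcast versions obtain locks from a HazelcastInstance instead:

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class SingleReaderProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        // cluster-wide lock keyed by a name shared by every instance
        Lock lock = Hazelcast.getLock("oracle-queue-table-reader");
        lock.lock();
        try {
            // handle the rows carried in the exchange body here
        } finally {
            lock.unlock();
        }
    }
}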

Related

Using multiple file suppliers in Spring Cloud Stream

I have an application with the file-supplier Spring Cloud module included. My workflow is: track file creation/modification in a particular directory -> push events to a Kafka topic if there are such events. FileSupplierConfiguration is used to configure this file supplier. But now I have to track one more directory and push events to another, separate Kafka topic. So there is an issue, because there is no way to include multiple FileSupplierConfiguration instances in the project to configure another file supplier. I remember that one of the main principles of microservices, for which spring-cloud-stream was designed, is that you do one thing and do it well without affecting others, but this is still the same microservice with the same tracking/pushing functionality, just for another directory and topic. Is there any way to add one more file supplier with its own configuration using the file-supplier module? Or is the best solution to this issue to run one more application instance with a different configuration?
Yes, the best way is to have another instance of this application, but with its own specific configuration properties. This is really how these functions have been designed: microservices based on convention over configuration. What you are asking really contradicts Spring Boot expectations. Imagine you needed to connect to several databases: you can only have a single JdbcTemplate auto-configured, and the rest is only possible manually. It is better to have the same code base, which relies on the auto-configuration, and apply different properties per instance.
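For example, the second instance could be started with little more than its own directory and destination settings. The property names below assume the standard file-supplier and Spring Cloud Stream binding properties; the directory and topic values are placeholders:

file.supplier.directory=/data/second-directory
spring.cloud.stream.bindings.fileSupplier-out-0.destination=second-topic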

Spring Batch and Apache Kafka

I am in the learning phase of Kafka. I came across this video.
This confuses me a lot. I am able to understand the Kafka consumer and producer, and I can see a lot of reference material related to them. We already have batch listeners, so why do we need Spring Batch support here? Is there any specific advantage of using Spring Batch with Kafka over using normal batch listeners? Please help me understand, as I can't find any reference material comparing both. What I felt is that we have more freedom and customisation using a normal consumer and producer. Please correct me if I am wrong.
Spring Batch is a batch processing framework (fixed data sets) while Kafka is a streaming platform (infinite data streams). Those tools address two different types of requirements and use cases.
However, there are many cases where you want to have a "bridge" between these two worlds. Here are a couple examples:
Replay a stream of events to rebuild application state up to a certain point in time: here you can use a Spring Batch job that reads a Kafka topic from the beginning and replays all events (the KafkaItemReader can be helpful here; a sketch follows at the end of this answer).
Inject a set of events from a file or a database table into a live stream. The KafkaItemWriter can be used in this case.
etc
The advantage of using a Spring Batch job over a regular batch listener is everything Spring Batch offers in terms of transaction management, state management for restartability, fault-tolerance features, etc.
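As a rough sketch of the first bridge (replaying a topic in a Spring Batch job), a KafkaItemReader could be configured along these lines; the topic name, partition, and connection settings are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.batch.item.kafka.KafkaItemReader;
import org.springframework.batch.item.kafka.builder.KafkaItemReaderBuilder;
import org.springframework.context.annotation.Bean;

@Bean
public KafkaItemReader<String, String> kafkaItemReader() {
    Properties consumerProperties = new Properties();
    consumerProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    consumerProperties.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-job");
    consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    // reads partition 0 of the topic; saveState(true) lets a restarted job resume from the stored offsets
    return new KafkaItemReaderBuilder<String, String>()
            .name("eventItemReader")
            .topic("events")
            .partitions(0)
            .consumerProperties(consumerProperties)
            .saveState(true)
            .build();
}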

Use eventstore with Axon Framework 3 and Spring Boot

I'm trying to realize a simple distributed application, and I would like to save all the events into the event store.
For this reason, as suggested in the "documentation" of Axon here, I would like to use Mysql as event store.
Since I don't have much experience with Spring, I cannot figure out how to get it working.
I would have two separate services one for the command side and one for the query side. Since I'm planning to have more services, I would like to know how to configure them to use an external event store (not stored inside of any of these services).
For the distribution of the commands and events, I'm using RabbitMQ:
@Bean
public org.springframework.amqp.core.Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("AxonEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("AxonEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This creates the required queue on a local running RabbitMQ instance (with default username and password).
My question is: How can I configure Axon to use mysql as an event store?
As the Reference Guide currently does not specify this, I am gonna point this out here.
Currently you have roughly two approaches you can follow when distributing an Axon application or separating an Axon application into (micro)services:
Use a full open-source approach
Use AxonHub / AxonDb
Taking approach 2, which you can do in a developer environment, you would only have to run AxonHub and AxonDb and configure them for your application.
That's it, you're done; you can scale out your application and all the messages are routed as desired.
If you want to take route 1, however, you will have to provide several pieces of configuration.
Firstly, you state you use RabbitMQ to route commands and events.
In fact, the framework does not allow using RabbitMQ to route commands at all. Note that it is a solution for distributing EventMessages, just not CommandMessages.
I suggest using either JGroups or Spring Cloud to route your commands in an open-source scenario (I have added links to the Reference Guide pages about distributing the CommandBus with JGroups and Spring Cloud).
To distribute your events, you can take three approaches:
Use a shared database for your events.
Use AMQP to send your events to different instances.
Use Kafka to send your events to different instances.
My personal preference when starting an application though, is to begin with one monolith and separate when necessary.
I think the term 'Evolutionary Micro Services' catches this nicely.
Anyhow, if you use the messaging paradigm supported by Axon to its fullest, splitting the Command side from the Query side afterwards should be quite simple.
If you'd in addition use the AxonHub to distribute your messages, then you are practically done.
Concluding though, I could not find a very exact question in your post.
Does this give you the required information to proceed, @Federico Ponzi?
Update
After having given it some thought, I think your solution is quite simple.
You are using Spring Boot and you want to set up your EventStore to use MySQL. For Axon to set the right EventStorageEngine (the infra component used under the covers to read/write events), you can simply add a dependency on the spring-boot-starter-data-jpa.
Axon's auto-configuration will in that scenario automatically notice that you have Spring Data JPA on your classpath, and as such will set up the JpaEventStorageEngine.
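In practice that boils down to adding spring-boot-starter-data-jpa (plus a MySQL JDBC driver) to the build and pointing the standard Spring Boot datasource properties at your MySQL instance, for example (the URL and credentials below are placeholders; schema generation via ddl-auto is only a convenience for development):

spring.datasource.url=jdbc:mysql://localhost:3306/axon_eventstore
spring.datasource.username=axon
spring.datasource.password=secret
spring.jpa.hibernate.ddl-auto=update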

Database event listener using spring boot

I need to attach a listener to a table in the DB which should call a Spring Boot method once a CRUD operation is performed on the table (pre-listeners and post-listeners). The entry can be made from any source. How can I do that in Spring Boot?
If the entity can be created from any source - e.g. manual insert - this is something which is outside of the scope and context of your running application.
What you're describing is known as the CDC (change data capture) pattern.
To implement CDC in this case you need to use the instrumentation of the underlying database - for example triggers.
As I see this is tagged with MongoDB - triggers are not an option, as MongoDB does not support triggers.
If you are using MongoDb v3.6+ you can leverage the new Change Streams feature. This is the official example with Java.
Change streams allow applications to access real-time data changes without the complexity and risk of tailing the oplog. Applications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will.
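As a minimal sketch (not the official example linked above), subscribing to a collection's change stream with the MongoDB Java driver could look roughly like this; the connection string, database, and collection names are placeholders, and change streams require a replica set:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.model.changestream.ChangeStreamDocument;
import org.bson.Document;

public class ChangeStreamListener {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> collection = client.getDatabase("mydb").getCollection("orders");
        // blocks and receives every insert/update/replace/delete performed on the collection
        try (MongoCursor<ChangeStreamDocument<Document>> cursor = collection.watch().iterator()) {
            while (cursor.hasNext()) {
                ChangeStreamDocument<Document> change = cursor.next();
                System.out.println(change.getOperationType() + ": " + change.getFullDocument());
            }
        }
    }
}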
If you are using earlier versions of MongoDb you can monitor the oplog or use tailable cursors with capped collections.
Another approach would be to look into a 3rd-party solution that turns everything happening in the DB into event streams - for example Debezium.
This article explains how to call any program from a DB trigger.
Therefore, you can just create a Spring Boot Java app and make the system call to your app.
A similar mechanism is also available in Oracle and other databases.

Apache camel read directory and get file contents

I would like to read a directory with a specific file path and get the file contents with Apache Camel and Spring Boot. I have a router and a processor class in Java. There are not many resources on the Internet, only the official Apache Camel website. Thank you in advance.
One option is to use the Apache Camel File component to consume files. One thing to keep in mind, however, is that if you're deploying in a clustered environment, extra precautions need to be taken to avoid the competing consumer issue. From the documentation:
Warning: most of the read lock strategies are not suitable for use in clustered mode. That is, you cannot have multiple consumers attempting to read the same file in the same directory. In this case, the read locks will not function reliably. The idempotent read lock supports clustering reliably if you use a cluster-aware idempotent repository implementation, such as the one from the Hazelcast component or Infinispan.
Because of this and other complexities, I typically avoid using the Camel file component for consuming files and just use the java.nio.file.Files API in a bean/processor, as it is more straightforward and provides easier mechanisms for dealing with this and other limitations.
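For example, a minimal sketch of that bean/processor approach, listing a directory and reading every regular file in it with the java.nio.file.Files API (the class name and directory path are placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DirectoryReader {
    // returns the text content of every regular file directly under the given directory
    public List<String> readAll(String directory) throws IOException {
        try (Stream<Path> paths = Files.list(Paths.get(directory))) {
            return paths.filter(Files::isRegularFile)
                        .map(this::readFile)
                        .collect(Collectors.toList());
        }
    }

    private String readFile(Path path) {
        try {
            return new String(Files.readAllBytes(path));
        } catch (IOException e) {
            throw new RuntimeException("Failed to read " + path, e);
        }
    }
}

Such a bean can then be invoked from a Camel route, e.g. with .bean(DirectoryReader.class, "readAll('/data/in')").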
