Is there any custom MySQL input event adapter for WSO2 CEP - wso2-cep

I wanted to have the event receiver/stream read from a DB instead of JMS, email, or HTTP.
In WSO2 CEP, MySQL/DB is available only as an output adapter, not as an input adapter.
So is there any custom MySQL input adapter available?
Please let me know if there is any alternative solution for a DB adapter.

I am afraid that WSO2 CEP does not have an Input Event Adapter to receive events from a database (you can find the list of available input event adapters in the product documentation).
To my understanding, this is mainly because WSO2 CEP is designed for realtime stream processing.
I guess the database is not the original source which generates the events?
If so, there should be another event publisher which writes to the database. If you have control over this publisher, wouldn't it be possible to get that publisher to send events to the WSO2 CEP server directly, rather than writing to the database and then reading from it?
In my opinion, that will be a better solution compared to reading from the database.
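For example, if the publisher can be changed, sending an event to a WSO2 CEP HTTP input event adapter is just a small HTTP POST. Here is a minimal sketch, assuming an HTTP receiver is configured; the endpoint path, receiver name, and JSON layout below are assumptions that must match your receiver and stream definition:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch: pushing one event to a WSO2 CEP HTTP input event adapter.
// Endpoint path, receiver name and JSON shape are assumptions -- they must
// match the receiver and stream definition configured on your CEP server.
public class CepHttpPublisher {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9763/endpoints/myHttpReceiver"); // hypothetical receiver endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Event payload shaped after a hypothetical stream with meta/payload attributes
        String event = "{\"event\": {\"metaData\": {\"sensorId\": \"s1\"},"
                     + " \"payloadData\": {\"temperature\": 23.4}}}";

        try (OutputStream out = conn.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("CEP responded with HTTP " + conn.getResponseCode());
    }
}
```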

Related

Read data directly from REST API with Kafka

Is such a situation even possible?
There is an application "XYZ" (in which there is no Kafka) that exposes a REST API. It is a Spring Boot application with which an Angular application communicates.
A new application (Spring Boot) is created which wants to use Kafka and needs to fetch data from the "XYZ" application, and it wants to do this using Kafka.
The "XYZ" application has an example endpoint [GET] api/message/all which returns all messages.
Is there a way to "connect" Kafka directly to this endpoint and read data from it? In short, the idea is for Kafka to consume data directly from the endpoint: communication between two microservices, where one microservice does not have Kafka.
What suggestions do you have for solving this situation? Because I guess this option is not possible. Is it necessary to add a publisher in application XYZ which will send data to the queue, and only then will the data be available for consumption by the new application?
Getting them via the REST interface might not be a very good idea.
Simply put, in the messaging world message delivery guarantees are a big topic, and the standard ways to solve that with Kafka are usually:
1) Producing messages from your service to a Kafka topic using the Producer-API (see the sketch after this list).
2) Using Kafka Connect to read from an outbox table.
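A minimal sketch of option 1), the Producer-API; the broker address, topic name, and payload are illustrative assumptions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of the Producer-API option: the "XYZ" service publishes each
// message to a Kafka topic itself. Topic name and payload are illustrative.
public class MessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for full acknowledgement for stronger delivery guarantees

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("xyz-messages", "message-id-1", "{\"text\":\"hello\"}"));
            producer.flush();
        }
    }
}
```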
Since you most likely already have a database attached to your API service, producing the messages directly to a topic raises the problem of dual writes: the write to the database might fail while the write to Kafka succeeds, or vice versa, so you can end up with inconsistent states. Depending on your use case this might or might not be a problem.
Nevertheless, to overcome that, the outbox pattern can come in handy.
Via the outbox pattern, you'd basically write your messages to a table, a so-called outbox table, and then use Kafka Connect to poll this table of the database. Kafka Connect is basically a cluster of workers that consume this database table and forward its entries to a Kafka topic. You might want to look at Confluent Cloud; they offer a fully managed Kafka Connect service, so you don't have to manage the cluster of workers yourself. Once the messages are in a Kafka topic, you can consume them with the standard Kafka Consumer-API/Streams-API.
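The write side of the outbox pattern can be sketched as below; the table and column names, JDBC URL, and credentials are assumptions. The key point is that the business row and the outbox row go into one local transaction, which is what removes the dual write:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch of the write side of the outbox pattern: the business row and the
// outbox row are inserted in ONE local transaction, so there is no dual write.
// Table/column names and the JDBC URL are assumptions; Kafka Connect later
// polls/streams the outbox table to a Kafka topic.
public class OutboxWriter {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/xyz", "app", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement business = conn.prepareStatement(
                         "INSERT INTO message (id, text) VALUES (?, ?)");
                 PreparedStatement outbox = conn.prepareStatement(
                         "INSERT INTO outbox (id, aggregate_type, payload) VALUES (?, ?, ?)")) {
                business.setString(1, "message-id-1");
                business.setString(2, "hello");
                business.executeUpdate();

                outbox.setString(1, "message-id-1");
                outbox.setString(2, "message");
                outbox.setString(3, "{\"text\":\"hello\"}");
                outbox.executeUpdate();

                conn.commit(); // both rows or neither -- no inconsistent state
            }
        }
    }
}
```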
What you're looking for is a source connector, i.e. a source connector for your specific database. E.g. for MongoDB: https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
For now, most source connectors produce in an at-least-once fashion. This means that the topic the connector writes to might contain a message twice, so if you need the messages to be consumed exactly once, make sure you think about deduplicating them.
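If exactly-once consumption matters, consumer-side deduplication can look like the following sketch. The in-memory set and topic name are illustrative only; a real service would persist seen IDs or make processing idempotent:

```java
import java.time.Duration;
import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of consumer-side deduplication for an at-least-once topic: skip a
// record whose key was already seen. An in-memory set is only illustrative;
// a real service would persist seen IDs (or make processing idempotent).
public class DedupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "xyz-consumer");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Set<String> seen = new HashSet<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("xyz-messages"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (seen.add(record.key())) {   // true only the first time this key appears
                        System.out.println("processing " + record.value());
                    }
                }
            }
        }
    }
}
```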

Spring Boot applications high availability

We have a microservice which is developed using Spring Boot. A couple of the functionalities it implements are:
1) A scheduler that, at a specified time, triggers a file download using WebHDFS and processes it; once the data is processed, it sends an email to users with a data-processing summary.
2) Reading messages from Kafka and, once the data is read, sending an email to users.
We are now planning to make this application highly available, in either an Active-Active or Active-Passive setup. The problem we are facing is that if both instances of the application are running, both will try to download the file / read the data from Kafka, process it, and send emails. How can this be avoided? I mean, how do we ensure that only one instance triggers the download and processes it?
Please let me know if there is a known solution for this kind of scenario, as it seems to be a common one in most projects. Is a master-slave/leader-election approach a correct solution?
Thanks
Let one service instance download the file, extract the information, and publish it via Kafka.
Check beforehand whether the information was already processed by querying Kafka or a local DB.
You could also publish a DataProcessed event that triggers the EmailService, which sends the corresponding e-mail.
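A minimal sketch of that beforehand check, assuming a table with a primary-key constraint in a DB shared by both instances (the table name, file ID, and JDBC URL are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch of the "check beforehand" idea: both instances race to insert the
// file's ID into a table with a PRIMARY KEY constraint; only the instance
// whose insert succeeds processes the file. Table name and JDBC URL are
// illustrative -- the DB must be shared by both instances in production.
public class ProcessedFileGuard {

    // Returns true if this instance claimed the file and should process it.
    static boolean tryClaim(Connection conn, String fileId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO processed_files (file_id) VALUES (?)")) {
            ps.setString(1, fileId);
            ps.executeUpdate();
            return true;                 // we inserted first -> we own this file
        } catch (SQLException duplicateKey) {
            return false;                // another instance already claimed it
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./ha-demo")) {
            if (tryClaim(conn, "report-2021-06-01.csv")) {
                // download via WebHDFS, process, then publish a DataProcessed
                // event to Kafka that triggers the EmailService
                System.out.println("processing file");
            } else {
                System.out.println("already handled by the other instance");
            }
        }
    }
}
```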

How to read the MQTT Mosquitto server persisted DB file

I am using the Mosquitto server for the MQTT protocol.
Using the persistence setting in a configuration file (passed with the -c option), I am able to save the data.
However, the generated file is a binary one.
How would one be able to read that file?
Is there any specific tool available for that?
Appreciate your views.
Thanks!
Amit
Why do you want to read it?
The data is only kept there while messages (QoS 1 or QoS 2) are in flight, to ensure they are not lost in transit while waiting for a response from the subscribed client.
Data may also be kept for clients that are disconnected but have persistent subscriptions (cleanSession=false), until those clients reconnect.
If you are looking to persist all messages for later consumption, you will have to write a client to subscribe and store this data in a DB of your choosing. One possible option to do this quickly and simply is Node-RED, but there are others, and some brokers even have plugins for this, e.g. HiveMQ.
If you really want to read the file itself, then you will probably have to write your own tool to do this, based on the Mosquitto source code.
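For the subscribe-and-store approach, a minimal sketch using the Eclipse Paho Java client; the broker URL and the catch-all topic filter are assumptions, and the println stands in for the DB insert of your choice:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;

// Sketch of the subscribe-and-store approach: a plain MQTT client subscribes
// to the topics of interest and persists every message itself. Broker URL and
// topic filter are assumptions; printing stands in for a real DB insert.
public class StoreAllMessages {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", "message-archiver");
        client.connect();
        client.subscribe("#", (topic, message) ->   // '#' matches every topic on the broker
                System.out.println(topic + " -> " + new String(message.getPayload())));
    }
}
```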

Enqueuing a JMS message directly into an Oracle persistent store

Is there a way to enqueue a JMS message into an Oracle table that is used as a persistent store for WebLogic JMS Server?
Thanks!
When you create a JMS Server, it will ask you to configure a persistent store. If you configure and use a JDBCStore (vs. a FileStore) it will ask for a database connection and create a table there called WL_Store, which it will use to store messages.
Are you asking if you can manually write a message into the WL_Store table?
You have yourself mentioned AQ; why not continue using AQ and configure WLS to enable consumption of messages from AQ itself?
It's not advisable to store messages in the JMS JDBC store directly. The JMS JDBC store holds not only the messages but also a bunch of extra information, like message state, destination information, and so on, which won't be straightforward to push programmatically.
Oracle hasn't provided a way to do this in their documentation anyway.
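If you go the AQ route, enqueuing through Oracle AQ's JMS interface can be sketched as follows. The host, SID, credentials, schema owner, and queue name are all assumptions, and the AQ queue must already exist (created with DBMS_AQADM) with a JMS text-message payload:

```java
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TextMessage;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

// Sketch of the AQ route suggested above: enqueue through Oracle AQ's JMS
// interface instead of touching the WebLogic JDBC store. Connection details,
// schema owner and queue name are assumptions; the queue must already exist.
public class AqEnqueue {
    public static void main(String[] args) throws Exception {
        QueueConnectionFactory factory =
                AQjmsFactory.getQueueConnectionFactory("dbhost", "ORCL", 1521, "thin");
        QueueConnection connection = factory.createQueueConnection("aquser", "secret");
        QueueSession session = connection.createQueueSession(true, QueueSession.SESSION_TRANSACTED);

        Queue queue = ((AQjmsSession) session).getQueue("AQUSER", "MY_JMS_QUEUE");
        QueueSender sender = session.createSender(queue);

        TextMessage message = session.createTextMessage("hello from AQ");
        sender.send(message);
        session.commit();   // AQ JMS sessions are typically transacted

        connection.close();
    }
}
```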

Publish-subscribe over clients in Progress 4GL

Is there some way to publish between clients on the network in Progress 4GL?
An (ugly) way would be to "publish" (write) to the DB and let all clients poll the DB, but of course I would like to avoid that.
I am using Progress OpenEdge Release 10.0B02.
No. There is no way to use the built-in PUBLISH and SUBSCRIBE statements across a session boundary.
It's one of those things that people ask product management for from time to time, but it never seems to make it onto the planned feature list.
You may be able to use a JMS broker like Apache ActiveMQ for your purposes. The publisher would be known as a producer, and the subscriber would be known as a consumer. ActiveMQ supports the STOMP protocol; there is an open-source OpenEdge ABL framework I wrote that will allow you to create a producer or consumer in pure ABL using STOMP frames.
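For reference, here is what a producer/consumer pair looks like on the ActiveMQ side, sketched with the JMS API in Java; the broker URL and topic name are assumptions. An ABL client exchanging STOMP frames with the same broker and topic would interoperate with both sides:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

// Reference sketch of a producer/consumer pair against ActiveMQ using JMS.
// Broker URL and topic name are assumptions; STOMP clients on the same
// broker and topic interoperate with this pair.
public class PubSubDemo {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The consumer must subscribe before the message is sent, because
        // plain (non-durable) topic subscribers only see live messages.
        MessageConsumer consumer = session.createConsumer(session.createTopic("client.events"));
        MessageProducer producer = session.createProducer(session.createTopic("client.events"));

        producer.send(session.createTextMessage("hello, everyone"));

        TextMessage received = (TextMessage) consumer.receive(2000); // wait up to 2 s
        System.out.println("received: " + received.getText());
        connection.close();
    }
}
```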
