I have no knowledge of Oracle EBS or the Oracle Alert mechanism.
My understanding is that Oracle Alert works just like a database trigger.
Will Oracle Alert fire when database updates/inserts happen from the back-end?
We have observed that the alert only fires for front-end transactions and does not run for back-end updates.
Is it guaranteed that, just like a trigger, an EBS Alert will fire on every update to the record?
My understanding is that Oracle Alert works just like a database trigger.
Yes, it is somewhat like a database trigger created from the Front-End Application. To explain further, there are two types of Oracle Alerts, Periodic and Event Alerts.
Periodic Alerts are alerts that have a specific schedule and run according to a set period and time.
Event Alerts are alerts that only send notifications whenever inserts or updates have been performed on a table from the Front-End Application.
Take note that for Event Alerts, the triggering table must be set up in Oracle EBS' Application Object Library (called an Application Table).
Will Oracle Alert fire when database updates/inserts happen from the backend?
No. Taking this line from Krishna Reddy:
Oracle Alerts can only be triggered from an application that has been registered in Oracle Applications. Alerts cannot be triggered via SQL updates or deletes to an Alert activated trigger.
To add more context, Oracle Alert is a simple and efficient way to get an immediate view of the critical activities in your Oracle Application. It helps business users and administrators stay on top of important or unusual business events they need to know about via e-mail. It can also automate a process depending on the user's response.
It does have some weaknesses and limitations, though: Oracle Alert cannot process more than around 50 rows, its report layout is text based and does not support HTML, and the text width is limited.
Check out the Oracle documentation and this good article about Oracle Alerts.
We are migrating from a legacy monolith application to a microservice architecture. We use the CQRS and event sourcing patterns and a message broker (RabbitMQ) as the communication mechanism. Now we are facing a challenge: how can we convert the old database to the new architecture, and how can we use event sourcing for it? Assuming the old database did not have events, can we do the data conversion without creating events? What is the starting point for our old database data in the event sourcing pattern?
One important thing to remember is that many databases internally event source: every write goes to a log and that log is used to update tables, replicate etc., after which the log is truncated. It's equivalent to event sourcing with a lot of snapshots and very little retention of events and old snapshots.
In these databases (which include the likes of Postgres, MySQL, Oracle, SQL Server, Cassandra, CosmosDB, to name ones I know from experience do this), there's a technique called Change Data Capture which essentially taps into the log and exposes a stream of changes to the database which can be treated as events from the database (or by extension as commands: "one service's events are another service's commands"). Debezium can be used to write CDC records to Kafka; for RabbitMQ you may need to roll something yourself, in which case you'll want to get acquainted with how CDC is exposed in your database.
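To make that concrete, here is a minimal sketch of a hand-rolled CDC-to-RabbitMQ bridge, assuming Postgres with a logical replication slot created for the wal2json output plugin; the slot name cdc_slot, the queue name legacy_changes, and the connection strings are made up for the example:

```python
# Minimal sketch: stream decoded WAL changes from Postgres and publish them to RabbitMQ.
# Assumes a logical replication slot "cdc_slot" created with the wal2json plugin.
import pika
import psycopg2
import psycopg2.extras

# Publish side: a plain RabbitMQ channel and a durable queue.
rabbit = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = rabbit.channel()
channel.queue_declare(queue="legacy_changes", durable=True)

# Capture side: a logical replication connection to the legacy database.
pg = psycopg2.connect(
    "dbname=legacy user=cdc",
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = pg.cursor()
cur.start_replication(slot_name="cdc_slot", decode=True)


def forward(msg):
    """Forward one decoded WAL message to RabbitMQ, then acknowledge it."""
    channel.basic_publish(
        exchange="",
        routing_key="legacy_changes",
        body=msg.payload,  # wal2json emits a JSON document per transaction
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    # Tell Postgres the change has been handled so the slot can advance.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)


cur.consume_stream(forward)  # blocks, invoking forward() for each change
```

Debezium gives you much more out of the box (initial snapshots, schema handling, delivery guarantees), so a sketch like this is mainly a starting point when you have to target RabbitMQ directly.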
Even if the database doesn't support CDC, if the data isn't that large, you can often turn it into an ersatz event stream by periodically dumping its data (if the records are timestamped, this can even work if the data is particularly slow moving) and implementing a service to track what changed: this won't tell you about changes that netted out, but it's often better than nothing. This sort of dump is also likely to be required if you need a "genesis" event to ensure that your initial state is current to when you moved to event-sourcing or CDC.
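As a sketch of that fallback (names and shapes are purely illustrative), the diff between two successive dumps keyed by primary key can be turned into synthetic change events:

```python
# Rough sketch of the "periodic dump" fallback: diff two snapshots of a table
# keyed by primary key and emit synthetic created/updated/deleted events.
def diff_snapshots(previous: dict, current: dict):
    """previous/current map primary key -> row dict from two successive dumps."""
    events = []
    for key, row in current.items():
        if key not in previous:
            events.append({"type": "RowCreated", "key": key, "data": row})
        elif row != previous[key]:
            events.append({"type": "RowUpdated", "key": key, "data": row})
    for key in previous.keys() - current.keys():
        events.append({"type": "RowDeleted", "key": key})
    return events
```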
This whole broad family of techniques has limitations compared to full event sourcing: reifying what changed is not as valuable as reifying what changed and why it changed. But it can be a useful middle ground in migrating to event-sourcing.
Referring to #alexey-zimarev's answer in this post, it is essential to have a starting event in your event-sourced database. You cannot reconstruct an event-sourced aggregate without replaying its events. Therefore, you need to map the legacy snapshot to an individual domain event of the relevant aggregate.
Either way, considering the event sourcing definition by Martin Fowler:
The fundamental idea of Event Sourcing is that of ensuring every change to the state of an application is captured in an event object, and that these event objects are themselves stored in the sequence they were applied for the same lifetime as the application state itself.
Therefore, it is not an appropriate solution to migrate legacy snapshots into the new system without extracting and storing domain events. It would turn your event-sourced project into a semi-event-sourced project, which is not a recognized paradigm for design and development.
You have an event store, which is a database for events. You can create the event data you need from the old database and insert it into the event store. After that, replay the events to create the read models.
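A minimal, library-free sketch of that idea, with illustrative names only: each legacy row is mapped to a single imported "genesis" event, appended to the store, and the read model is rebuilt by replaying:

```python
# Sketch: seed an event store from legacy data, then replay to build a read model.
from collections import defaultdict

event_store = defaultdict(list)  # aggregate id -> ordered list of events


def import_legacy_row(row):
    """Map one legacy snapshot row to a genesis event for its aggregate."""
    event_store[row["id"]].append({"type": "LegacyRecordImported", "data": row})


def rebuild_read_model():
    """Replay all events to produce a simple read model (one dict per aggregate)."""
    read_model = {}
    for aggregate_id, events in event_store.items():
        state = {}
        for event in events:
            if event["type"] == "LegacyRecordImported":
                state.update(event["data"])
            # ...apply newer domain events here as they are introduced
        read_model[aggregate_id] = state
    return read_model
```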
We are using a code, and if that code is used we want a report to be sent out automatically.
Sales code (if this sales code is used, send out the report).
This is used as a check method to ensure that the sales code is not used improperly.
Not sure how to do this in Cognos.
Thanks in advance,
Nathan
Event Studio might be the way to go here.
Use IBM® Cognos® Event Studio to notify decision-makers in your organization of events as they happen, so that they can make timely and effective decisions.
You create agents that monitor your organization's data to detect occurrences of business events. An event is a situation that can affect the success of your business. An event is identified when specific items in your data achieve significant values. Specify the event condition, or a change in data, that is important to you. When an agent detects an event, it can perform tasks, such as sending an e-mail, adding information to the portal, and running reports.
https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_cr_es.doc/c_event_studio.html
Is there a practical way for my app to get notified when Heroku Connect adds records to a table?
I currently have a Flask app connected to a Salesforce org via Heroku Connect. I have event listeners for before_insert, after_insert, before_update, after_update. Additionally, SQLALCHEMY_ECHO is set to True. When I create a record in Salesforce, none of the event listeners fire, and no SQL statements are printed. However, if I query the model that matches the mapped sObject, I can see the new record. Therefore, Heroku Connect must be updating the table, but in a way that doesn't trigger event listeners. I did read up a bit on pg_notify (LISTEN), but all solutions seem to involve a select loop, which is much less elegant than db.event.listens_for decorators.
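For reference, a hedged sketch of that pg_notify/LISTEN approach: a trigger on the Heroku Connect table calls pg_notify(), and the app listens on that channel. The trigger, channel, and table names (salesforce.contact, hc_contact_insert) and the DSN are assumptions, and the select loop is exactly the part that feels less elegant than listener decorators:

```python
# Sketch: notify the app when Heroku Connect inserts rows, via a Postgres trigger + LISTEN.
import json
import select

import psycopg2

SETUP_SQL = """
CREATE OR REPLACE FUNCTION notify_hc_insert() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('hc_contact_insert', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS hc_contact_insert_trg ON salesforce.contact;
CREATE TRIGGER hc_contact_insert_trg
  AFTER INSERT ON salesforce.contact
  FOR EACH ROW EXECUTE PROCEDURE notify_hc_insert();
"""

conn = psycopg2.connect("dbname=myapp")  # DSN is an assumption
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(SETUP_SQL)
    cur.execute("LISTEN hc_contact_insert;")

# The select() loop: block until Postgres has a notification for us.
while True:
    if select.select([conn], [], [], 60) == ([], [], []):
        continue  # timeout, loop again
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        record = json.loads(note.payload)
        print("Heroku Connect inserted:", record)
```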
Can somebody help me with how to send notifications using Azure SQL Server?
Let's say I want to create an application where the user adds, updates, or deletes a schedule.
If the scheduler runs and finds that it has to send notifications for a particular time, say 6:00 pm, is there any way I can use SQL Server to send the notification when called by the scheduled job?
I believe it is not possible with SQL Azure. In SQL Server, you have Query Notifications. If you want to use SQL Azure, then I would suggest implementing the notifications functionality in your own application. So the application, when making changes to the database, sends the notification as well, for example using Azure Queues, with information about the columns that were updated.
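A small sketch of that idea, assuming Azure Storage Queues and a pyodbc-style cursor; the queue name, connection string, table, and column names are placeholders:

```python
# Sketch: after the application writes to SQL Azure, it also enqueues a message
# describing what changed, for a downstream worker to turn into a notification.
import json
from datetime import datetime, timezone

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    conn_str="<storage-account-connection-string>",
    queue_name="schedule-notifications",
)


def save_schedule_and_notify(cursor, schedule_id, run_at, columns_updated):
    """Update the schedule row, then enqueue a notification about the change."""
    cursor.execute(
        "UPDATE dbo.Schedules SET RunAt = ? WHERE ScheduleId = ?",
        (run_at, schedule_id),
    )
    queue.send_message(json.dumps({
        "schedule_id": schedule_id,
        "run_at": run_at.isoformat(),
        "columns_updated": columns_updated,
        "sent_at": datetime.now(timezone.utc).isoformat(),
    }))
```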
This is from the Streams AQ docs.
You can register system events, user events, and notifications on queues with Oracle Internet Directory. System events are database startup, database shutdown, and system error events. User events include user log on and user log off, DDL statements (create, drop, alter), and DML statement triggers. Notifications on queues include OCI notifications, PL/SQL notifications, and e-mail notifications.
Sounds interesting. What does this get me?
I mean these things look like DDL Triggers... So it's a matter of not building the DDL trigger in a database but instead building it in OID and letting OID manage the firing of the trigger?
Having never used it, this is my guess.
Imagine you have a hundred databases, and you want to log every time people log into each one. You could do it on each individual server, but that would make answering questions like "Which databases did 'Mark' log in to?" difficult.
So, instead, you have each database register its "user logon" events with OID (via AQ), you then have a process receiving these events from OID and logging them. You then have a single point where you can audit system wide logins.
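A simplified sketch of what that receiving process might look like, assuming python-oracledb and a RAW queue named LOGON_EVENTS that the databases enqueue into; the OID registration step is left aside, and the queue name, DSN, and payload format are all assumptions:

```python
# Sketch: a central process that drains logon events from an AQ queue and logs
# them in one place, giving a single point for system-wide login auditing.
import logging

import oracledb

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("logon-audit")

conn = oracledb.connect(user="audit", password="***", dsn="auditdb")
queue = conn.queue("LOGON_EVENTS")
queue.deqoptions.wait = oracledb.DEQ_WAIT_FOREVER  # block until a message arrives

while True:
    msg = queue.deqone()
    conn.commit()  # dequeues are transactional
    log.info("logon event: %s", msg.payload.decode("utf-8", errors="replace"))
```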
You can likely also use it to propagate messages from one AQ to another, and to lookup what queues exist within the system which can be subscribed to.