Ncqrs: How to store events as part of test set-up - tdd

How do I store events as part of setting up my tests?
Currently I'm initializing application state by sending commands like this:
Given some commands were sent
When sending another command
Then some events should have been published
I'm using ICommandService.Execute() to send the Commands in the Given and When parts.
Since commands can be rejected by the domain, I wouldn't want to rely on them. I'd rather set up my application state by simulating events like this:
Given _some events_ occurred
When sending a command
Then some events should have been published
How do I push the events from Given into the event store so that they can be replayed while handling the "When" part?
Thanks
Dennis

I have since been given the answer on the mailing list and will add it here for future reference:
I was using an old version of Ncqrs. The current version exposes Ncqrs.Eventing.Storage.IEventStore.Store(), which takes an event stream and can be used during test set-up just as needed.
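For anyone else landing here, a rough sketch of what that set-up can look like (the domain event, the command, and the exact UncommittedEventStream/UncommittedEvent constructor arguments are my assumptions; check the signatures in your copy of the Ncqrs sources):

    // "Given": push the events straight into the event store so they are
    // replayed when the "When" command loads the aggregate root.
    var eventSourceId = Guid.NewGuid();

    var stream = new UncommittedEventStream(Guid.NewGuid() /* commit id */);
    stream.Append(new UncommittedEvent(
        Guid.NewGuid(),                  // event id
        eventSourceId,                   // aggregate root id
        1,                               // sequence within the stream
        0,                               // initial version of the event source
        DateTime.UtcNow,                 // time stamp
        new SomethingHappenedEvent(),    // hypothetical domain event payload
        new Version(1, 0)));             // event (schema) version

    eventStore.Store(stream);            // the IEventStore from the test fixture

    // "When": send the command through the real pipeline as before.
    commandService.Execute(new DoSomethingCommand(eventSourceId)); // hypothetical command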

Related

Using Spring or Lambda for bulk event trigger

Looking for some help with an application design. I am using the Spring Framework and hosting the application in AWS.
I am working on an enterprise Java web application that is supposed to handle events when their trigger time is reached. For example, consumers can set an event to begin on 12/20/22 at 07:35 AM, and the system is supposed to send a notification when that time is reached.
I can store these events in a database along with their trigger time and set up a Spring scheduler (@Scheduled) to run every minute and process events whose trigger time has been reached. My only concern with this approach is that there could be hundreds or thousands of events to trigger in any given minute, and they may not all be processed within that minute.
Is there an alternative way to design this? I don't know whether Spring offers a feature where I could create these events and have the framework trigger them when their trigger time is reached. That way, I could stay away from managing the scheduling and triggering part.
Since I am using AWS to host this application, another option I'm considering is creating an AWS Lambda for every such event and letting AWS manage the triggering. That way, I wouldn't have to manage the triggers myself.
Let me know your views, or if you have come across similar problems, how you resolved them.
You can consider using spring-cloud-dataflow to manage this as tasks and streams.
You create a custom batch application that uses @Scheduled to check your database for events that are due and then sends them to a stream (see the sketch at the end of this answer). You can use the Spring Integration APIs to interact with RabbitMQ or Kafka topics.
The event should contain all the information needed to process it.
You then have a stream application that produces the content and sends it via email, or passes it on to a separate stream app that sends the email.
https://dataflow.spring.io/docs/stream-developer-guides/programming-models/
The flow will look something like:
:mail_events | message-processor | message-sender
You will configure a property for mail_events to match the topic created and configured for your mail-event-batch application.
You can use Spring Cloud Data Flow to manage the mail-event-batch application as well.
You can scale each application https://dataflow.spring.io/docs/recipes/scaling/
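To make the batch side concrete, here is a minimal sketch of such a poller using Spring Cloud Stream's StreamBridge (EventRepository, TriggeredEvent, and the mail-events binding name are placeholders; you also need @EnableScheduling on a configuration class):

    import java.time.Instant;
    import org.springframework.cloud.stream.function.StreamBridge;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class DueEventPoller {

        private final EventRepository repository; // hypothetical repository of scheduled events
        private final StreamBridge streamBridge;

        public DueEventPoller(EventRepository repository, StreamBridge streamBridge) {
            this.repository = repository;
            this.streamBridge = streamBridge;
        }

        // Poll once a minute; the heavy lifting happens downstream in the
        // stream apps, so this loop only has to read rows and publish.
        @Scheduled(fixedDelay = 60_000)
        public void publishDueEvents() {
            for (TriggeredEvent event : repository.findDueBefore(Instant.now())) {
                // The payload must carry everything the processor needs.
                streamBridge.send("mail-events", event);
                repository.markPublished(event.getId()); // avoid re-sending on the next run
            }
        }
    }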

Count open socket and channel connections in a Phoenix application

Is there a relatively simple, documented way in a Phoenix application to read how many active sockets and channels are currently open at any given time? And more specifically, is it possible to filter this data by topic and other channel connection metadata?
My use case is for analytics on active connections to my backend.
Thanks for any suggestions!
You are looking for Phoenix.Presence. From the documentation:
Provides Presence tracking to processes and channels.
This behaviour provides presence features such as fetching presences for a given topic, as well as handling diffs of join and leave events as they occur in real-time. Using this module defines a supervisor and allows the calling module to implement the Phoenix.Tracker behaviour which starts a tracker process to handle presence information.
Basically, you are supposed to implement the Phoenix.Presence behaviour (an almost ready-to-go example is in the docs) and the Phoenix.Tracker behaviour according to your needs.
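A minimal sketch of both pieces (the module names, :my_app, MyApp.PubSub, and the user_id assign are placeholders):

    defmodule MyAppWeb.Presence do
      # Remember to add MyAppWeb.Presence to your application's supervision tree.
      use Phoenix.Presence,
        otp_app: :my_app,
        pubsub_server: MyApp.PubSub
    end

    defmodule MyAppWeb.RoomChannel do
      use Phoenix.Channel
      alias MyAppWeb.Presence

      def join("room:" <> _room_id, _params, socket) do
        # Start tracking only after the join has succeeded.
        send(self(), :after_join)
        {:ok, socket}
      end

      def handle_info(:after_join, socket) do
        # The metadata map is yours: put whatever you want to filter on later.
        {:ok, _ref} =
          Presence.track(socket, socket.assigns.user_id, %{
            joined_at: System.system_time(:second)
          })

        {:noreply, socket}
      end
    end

Presence.list/1 then gives you the presences per topic: map_size(MyAppWeb.Presence.list("room:lobby")) counts distinct keys, and summing the lengths of each entry's :metas list counts individual connections, since one key can be present on several sockets.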

Trigger a shell script on mail arrival

How can I trigger a shell script on email arrival that extracts the mail into a text file? I want to extract the information in the mail, process it to determine the request, and send an automated response to that request. The mail will basically consist of a data request, and the response will have the requested data in a text file attached to it.
Look into the documentation of your MTA (mail transfer agent). Many of them allow you to run scripts or hooks when mail arrives and certain other conditions are met.
If you're using Linux and want a pure client-side solution (i.e. independent of the mail server software), then you should look at procmail. The documentation contains lots of useful tips and hints on how to set up the tool (like performance considerations) and how to properly set up the environment so your script executes correctly.
It also contains examples like a service which responds to "ping" mails.
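In the same spirit, a .procmailrc recipe along these lines pipes each matching mail into a script on arrival (the subject pattern and script path are placeholders):

    # ~/.procmailrc -- assumes procmail is your local delivery agent
    :0
    * ^Subject:.*data request
    | $HOME/bin/handle-data-request.sh

The script receives the complete message (headers and body) on standard input, so dumping it to a text file is a one-liner:

    #!/bin/sh
    # handle-data-request.sh -- invoked by procmail with the mail on stdin
    cat > "/tmp/request-$$.txt"
    # ...parse the file, build the reply, and send it back (e.g. with mailx).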

How to perform process interrupts in jBPM based on business data

I've been doing some research into BPM solutions and am hoping to use jBPM to achieve my goal. I am aware it is possible to start a process instance with an event signal sent to the process engine, but I would like to be able to interact with process instances currently running in that engine WITHOUT knowing their instance IDs.
I am aiming to achieve this in an interrupt fashion by sending an event with business data to the process engine, which will be matched to the process instance containing that specific business data (for instance, a customer number unique to a process instance).
I have not yet been able to figure out how to do this. Another of my goals is to expose this via REST/SOAP, and I am aware that this functionality is NOT currently implemented in the jBPM5 console REST interface.
How would I go about doing this, what are the standard patterns for doing so, or what other process engines should I be looking at to achieve this?
Yes, you can achieve that with jBPM, and I would recommend you check out jBPM 6 CR2.
To do what you need, you can start multiple processes inside a KieSession and then send your customer as the payload of your event. Only the process instance that has that customer will catch the event (if it's modeled correctly, with a catch event node that filters by customer).
The REST endpoints are already there in jBPM 6.
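In code, that pattern comes down to something like this (the process id, signal name, and customer payload are placeholders; the per-customer filtering itself lives in the process model's signal catch node):

    import java.util.HashMap;
    import java.util.Map;
    import org.kie.api.runtime.KieSession;

    public class CustomerSignals {

        // Start one instance per customer; the customer travels as a process variable.
        public static long startFor(KieSession ksession, Object customer) {
            Map<String, Object> params = new HashMap<>();
            params.put("customer", customer); // hypothetical business object
            return ksession.startProcess("com.example.customerProcess", params).getId();
        }

        // Broadcast to the whole session without knowing any instance id.
        // Only instances whose catch event matches this payload will react.
        public static void notifyCustomerUpdated(KieSession ksession, Object customer) {
            ksession.signalEvent("customer-updated", customer);
        }
    }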
Hope it helps

How can I implement Pre- and Post-Commit Hooks in Riak?

There is but scant information on the web as to how to actually implement these features of Riak besides this blog post and a few others. Are any client libraries (ripple etc.) capable of receiving messages via the hook so that working with the changed data in the app (i.e. outside of Riak) becomes possible? Thanks.
It's not possible to have Riak call back into your application, however if you use the "returnbody" option when storing, you'll get back the value that was actually stored as modified by pre-commit hooks.
Post-commit hooks are run asynchronously after the object is stored and so should not be used to modify the stored object. One way you might get "messages via the hook" would be to have your post-commit hook post messages to RabbitMQ (or some other queue), which your application could then consume and do its own processing.
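For orientation, the skeleton of such an Erlang post-commit hook might look like the following (the module name and the logging stand-in for queue publishing are assumptions; the hook is registered in the bucket's postcommit property):

    -module(notify_hook).
    -export([postcommit/1]).

    %% Post-commit hooks receive the stored riak_object; the return value is
    %% ignored, so don't try to modify the object here.
    postcommit(Object) ->
        Bucket = riak_object:bucket(Object),
        Key = riak_object:key(Object),
        %% Stand-in for handing the change to the outside world, e.g.
        %% publishing {Bucket, Key} to RabbitMQ with an AMQP client.
        error_logger:info_msg("stored ~p/~p~n", [Bucket, Key]),
        ok.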
I hope that gives you an idea of where to start. In the meantime, we'll add some examples to that wiki page.
