something basic I'm missing.
Here is my pseudo scenario:
Say my flow can begin with either State A or State B.
If the flow began with State A, it should transition to State B, triggering event S.
If the flow began with State B, it should transition to State A, triggering event D.
How can I build such a configuration?
Here's the pseudo code for that pseudo scenario:
if (stateA) {
    applyStateB();
    triggerEventS();
} else if (stateB) {
    applyStateA();
    triggerEventD();
}
How can I guarantee linearizability of requests in Reactor Netty?
Theory:
Given:
Request A wants to write x=2, y=0
Request B wants to read x, y and write x=x+2, y=y+1
Request C wants to read x and write y=x
All Requests are processed asynchronously and return to the client immediately with status ACCEPTED.
Example:
Send requests A, B, C in order.
Example Log Output: (request, thread name, x, y)
Request A, nioEventLoopGroup-2-0, x=2, y=0
Request C, nioEventLoopGroup-2-2, x=2, y=2
Request B, nioEventLoopGroup-2-1, x=4, y=3
Business logic requires all reads after A to see x=2 and y=0.
And request B to see x=2, y=0 and set y=1.
And request C to see x=4 and set y=4.
In short: the business logic makes every subsequent write operation depend on the completion of the previous write operation. Otherwise the operations are not reversible.
Example Code
Document:
@Document
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Event {
    @Id
    private String id;
    private int data;

    public Event withNewId() {
        setId(UUID.randomUUID().toString());
        return this;
    }
}
Repo:
public interface EventRepository extends ReactiveMongoRepository<Event, String> {}
Controller:
@RestController
@RequestMapping(value = "/api/event")
@RequiredArgsConstructor
public class EventHandler {
    private final EventRepository repo;

    @PostMapping
    public Mono<String> create(Event event) {
        return Mono.just(event.withNewId().getId())
            .doOnNext(id ->
                // do query based on some logic depending on event data
                Mono.just(someQuery)
                    .flatMap(query ->
                        repo.find(query)
                            .map(e -> event.setData(event.getData() + e.getData())))
                    .switchIfEmpty(Mono.just(event))
                    .flatMap(e -> repo.save(e))
                    .subscribeOn(Schedulers.single())
                    .subscribe());
    }
}
It does not work, but with subscribeOn I am trying to guarantee linearizability, meaning that concurrent requests A and B will always write their payloads to the DB in the order in which they were received by the server. Therefore, if another concurrent request C is a compound of first a read and then a write, it will read changes from the DB that reflect those of request B, not A, and write its own changes based on B's.
Is there a way in Reactor Netty to schedule executors with an unbound FIFO queue, so that I can process the requests asynchronously but in order?
I don't think this is specific to Netty or Reactor in particular, but to a broader topic: how to handle out-of-order message delivery and more-than-once message delivery. A few questions:
Does the client always send the same number of requests in the same order? There's always a chance that, due to networking issues, the requests may arrive out of order, or one or more may be lost.
Does the client make retries? What happens if the same request reaches the server twice?
If the order matters, why doesn't the client wait for the result of the (n-1)th request before issuing the nth request? In other words, why are there many concurrent requests?
I'd try to redesign the operation in such a way that there's a single request executing the operations on the backend in the required order, using concurrency there if necessary to speed up the process.
If that's not possible, for example because you don't control the client, or more generally the order in which the events (requests) arrive, you have to implement ordering in application-level logic, using per-message semantics to do the ordering. You can, for example, store or buffer the messages, wait for all of them to arrive, and only then trigger the business logic using the data from the messages in the correct order. This requires some kind of key (identity) that attributes messages to the same entity, and a sorting key by which you can sort the messages into the correct order.
EDIT:
After getting the answers, you can definitely implement it "the Reactor way".
Sinks.Many<Event> sink = Sinks.many() // you create a 'sink' where the events will go
    .multicast() // broadcasts all messages to all subscribers of the stream
    .directBestEffort(); // additional semantics - publishing will fail if there are no subscribers - doesn't really matter here
Flux<Event> eventFlux = sink.asFlux(); // the 'view' of the sink as a flux you can subscribe to
public void run() {
subscribeAndProcess();
sink.tryEmitNext(new Event("A", "A", "A"));
sink.tryEmitNext(new Event("A", "C", "C"));
sink.tryEmitNext(new Event("A", "B", "B"));
sink.tryEmitNext(new Event("B", "A", "A"));
sink.tryEmitNext(new Event("B", "C", "C"));
sink.tryEmitNext(new Event("B", "B", "B"));
}
void subscribeAndProcess() {
eventFlux.groupBy(Event::key)
.flatMap(
groupedEvents -> groupedEvents.distinct(Event::type) // distinct to avoid duplicates
.buffer(3) // there are three event types, so we buffer and wait for all to arrive
.flatMap(events -> // once all the events are there we can do the processing the way we need
Mono.just(events.stream()
.sorted(Comparator.comparing(Event::type))
.map(e -> e.key + e.value)
.reduce(String::concat)
.orElse(""))
)
)
.subscribe(System.out::println);
}
// prints values concatenated in order per key:
// - AAABAC
// - BABBBC
See Gist: https://gist.github.com/tarczynskitomek/d9442ea679e3eed64e5a8470217ad96a
There are a few caveats:
If all of the expected events for a given key don't arrive, you waste memory on buffering - unless you set a timeout
How will you ensure that all the events for a given key go to the same application instance?
How will you recover from failures encountered mid-processing?
Having all this in mind, I would go with persistent storage - say, saving the incoming events in the database and doing the processing in the background - and for this you don't need to use Reactor. Most of the time a simple Servlet-based Spring app will be far easier to maintain and develop, especially if you have no previous experience with Functional Reactive Programming.
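For illustration, a minimal sketch of that persist-first approach, assuming a blocking Spring app with a scheduled background worker (EventEntity, EventJpaRepository, and the derived query method are hypothetical names, not from the question):
@Component
public class EventProcessor {
    private final EventJpaRepository repo; // hypothetical JPA repository

    public EventProcessor(EventJpaRepository repo) {
        this.repo = repo;
    }

    @Scheduled(fixedDelay = 1000)
    public void drainPending() {
        // Process pending events one at a time, in arrival order,
        // so every write sees the effects of the previous one.
        for (EventEntity e : repo.findByProcessedFalseOrderByReceivedAtAsc()) {
            process(e);           // business logic
            e.setProcessed(true); // mark done so it is not picked up again
            repo.save(e);
        }
    }

    private void process(EventEntity e) { /* ... */ }
}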
Looking at the provided code I would not try to handle it on Reactor Netty level.
First, several comments regarding the controller implementation, because it has multiple issues that violate reactive principles. I would recommend spending some time learning the reactive API, but here are some hints:
In reactive, nothing happens until you subscribe. At the same time, calling subscribe explicitly is an anti-pattern and should be avoided unless you are creating a framework similar to WebFlux.
The parallel scheduler should be used to run non-blocking logic, unless you have some blocking code.
doOn... operators are so-called side-effect operators and should not be used for constructing reactive flows.
@PostMapping
public Mono<String> create(Event event) {
    // do query based on some logic depending on event data
    return repo.find(query)
        .map(e -> {
            event.setData(event.getData() + e.getData());
            return event;
        })
        .switchIfEmpty(Mono.just(event))
        .flatMap(e -> repo.save(e));
}
Now, processing requests in a predefined sequence can be tricky because of network failures, possible retries, etc. What if you never get Request B or Request C? Should you still persist Request A?
As @ttarczynski mentioned in his comment, the best option is to redesign the API and send a single request.
In case that's not an option, you would need to introduce some state to "postpone" request processing and then, depending on the consistency semantics, process the requests as a "batch" when the last one is received, or just defer Request C until you get Requests A & B.
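If you do keep separate requests, you could also build a FIFO pipeline "the Reactor way". A sketch under assumptions (not the poster's code): push each accepted event into a single-subscriber sink inside the controller and drain it with concatMap, which subscribes to one inner publisher at a time and therefore preserves arrival order:
private final Sinks.Many<Event> queue = Sinks.many().unicast().onBackpressureBuffer();

@PostConstruct
void startPipeline() {
    queue.asFlux()
        .concatMap(event -> repo.save(event)) // one save at a time, in arrival order
        .subscribe(); // the single long-lived subscription that drains the queue
}

@PostMapping
public Mono<String> create(Event event) {
    // Enqueue and return the generated id immediately (ACCEPTED semantics).
    // Note: with concurrent emitters you may need
    // queue.emitNext(event, Sinks.EmitFailureHandler.busyLooping(Duration.ofMillis(10)));
    queue.tryEmitNext(event.withNewId());
    return Mono.just(event.getId());
}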
I have a method that may be used in multiple goroutines and run concurrently.
Inside this method, I have a conditional statement. If the conditional statement is true, I want all other goroutines calling this method to wait for one and only one of the goroutines to execute this conditional statement before proceeding to the next section.
For example:
type SomeClass struct {
mu sync.Mutex
}
func (c *SomeClass) SomeFunc() {
//Do some calculation
if condition {
//This part should be executed by only one goroutine if the condition is true.
//All others must wait for this to finish
}
//Additional calculations
}
And I want to use it like this:
func main(){
// initialize
go someClass.SomeFunc()
//If the condition is true, the following will wait at the conditional statement until the first one finishes the code inside the conditional block
//Once it's done, they can run concurrently
go someClass.SomeFunc()
go someClass.SomeFunc()
}
Edit
This is perhaps not the right design for this so I'm looking for any suggestions on how to implement this.
Edit2:
Note that each routine has its own condition; the value of condition is not shared between threads. However, the work inside the conditional should run only once if the condition in 2 or more routines happens to be true at the same time.
You'll want a mutex protecting the condition from concurrent read/writes, and then a method for resetting the condition when you wish to execute the synchronous code again.
type SomeClass struct {
conditionMu sync.Mutex
condition bool
}
func (c *SomeClass) SomeFunc() {
// Lock the mutex, so that concurrent calls to SomeFunc will wait here.
c.conditionMu.Lock()
if c.condition {
// Synchronous code goes here.
// Reset the condition to false so that any waiting goroutines won't run the code inside this block again.
c.condition = false
}
// Unlock the mutex, and any waiting goroutines.
c.conditionMu.Unlock()
}
// ResetCondition sets the stored condition to true in a thread-safe manner.
func (c *SomeClass) ResetCondition() {
c.conditionMu.Lock()
c.condition = true
c.conditionMu.Unlock()
}
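For completeness, a minimal usage sketch (the WaitGroup wiring and the initial condition value are my assumptions, not part of the answer above):
func main() {
	c := &SomeClass{condition: true} // first goroutine to acquire the lock runs the block
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.SomeFunc() // only one goroutine sees condition == true; the rest skip the block
		}()
	}
	wg.Wait()
	c.ResetCondition() // allow the synchronous block to run once more on a later call
}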
The other answers to this question are incorrect, because they do not satisfy the requirements of the question.
If the lock is added outside the conditional statement, it acts as a barrier and forces all routines to synchronize at that spot. That is not the point of this question. Suppose resolving the condition value takes a long time; we do not want to check the value one routine at a time. We want to let every routine check the condition at once, so that if the condition is false, we can move forward without stopping.
We want to ensure that the goroutines run in parallel if the condition is not true. Adding a lock inside the method but outside the conditional statement will not allow that.
The following solutions are correct; they passed all tests and performed well.
Solution 1:
Use 2 nested conditional statements, such as this:
Note that in this case, if the condition is false, no lock is taken and no synchronization is needed; everything can run in parallel.
type SomeClass struct {
	conditionMu            sync.Mutex
	rwMu                   sync.RWMutex
	additionalWorkRequired bool
}

func (c *SomeClass) SomeFunc() {
	// Do some work ...

	// Note: the condition is not shared; some routines can have false and some true
	// at the same time, which is fine.
	condition := true

	// All routines can check this condition and enter the block if it is true.
	if condition {
		c.rwMu.Lock()
		c.additionalWorkRequired = true
		c.rwMu.Unlock()

		// Lock so other routines wait here for the first one.
		c.conditionMu.Lock()
		// Re-take rwMu while reading and clearing the flag, to avoid a data race
		// with routines that are still setting it above.
		c.rwMu.Lock()
		doWork := c.additionalWorkRequired
		c.additionalWorkRequired = false
		c.rwMu.Unlock()
		if doWork {
			// Synchronous code goes here.
		}
		// Unlock so all other routines can move forward in parallel.
		c.conditionMu.Unlock()
	}

	// Finish up the remaining work.
}
Solution 2:
Use the Do function from the golang.org/x/sync/singleflight package, which can handle this situation automatically.
From documentation:
Do executes and returns the results of the given function, making sure that only one execution is in-flight for a given key at a time. If a duplicate comes in, the duplicate caller waits for the original to complete and receives the same results. The return value shared indicates whether v was given to multiple callers.
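A minimal sketch of how Do could be applied here (the key name and the doExpensiveWork helper are hypothetical):
package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

func someFunc(condition bool) {
	// Do some calculation ...
	if condition {
		// All goroutines reaching this point with the same key share one execution:
		// the first caller runs the function, the others wait and receive its result.
		v, err, shared := group.Do("additional-work", func() (interface{}, error) {
			return doExpensiveWork() // hypothetical: the code that must run only once
		})
		fmt.Println(v, err, shared) // shared == true for callers that reused the result
	}
	// Additional calculations ...
}

func doExpensiveWork() (string, error) { return "done", nil }

func main() { someFunc(true) }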
Edit:
Since many seem to be confused by this question and answer, I'm adding a use case which might make things clearer:
1. Send an HTTP request
2. If the server returns an error saying the credentials are incorrect (this is the condition):
2.1. Save the current credentials in a local variable
2.2. Acquire the mutex lock
2.2.1. Compare the shared credentials with the ones in the local variable (this is the second condition)
If they are the same, replace them with new ones
2.3. Unlock
2.4. Retry the request
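A sketch of that use case in Go (Client, send, isAuthError, and refreshCredentials are hypothetical names, assumed for illustration):
type Client struct {
	mu    sync.Mutex
	creds string // shared credentials
}

func (c *Client) currentCreds() string {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.creds
}

func (c *Client) doRequest() error {
	err := c.send()
	if isAuthError(err) { // the condition: server rejected the credentials
		stale := c.currentCreds() // 2.1: snapshot the credentials we just used
		c.mu.Lock()               // 2.2: serialize the refresh
		if c.creds == stale {     // 2.2.1: still the same? we are the first to notice
			c.creds = refreshCredentials() // replace with new ones, exactly once
		}
		c.mu.Unlock()   // 2.3
		return c.send() // 2.4: retry with the (possibly refreshed) credentials
	}
	return err
}

func (c *Client) send() error    { return nil }          // hypothetical HTTP request
func isAuthError(err error) bool { return err != nil }   // hypothetical check
func refreshCredentials() string { return "new-token" }  // hypothetical refresh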
I am using CANoe 11.0 to send a signal value from a button.
I have a message from a CAN db with 6 signals, 8 bits each. The message is cyclic, but with a cycle time of 0 ms, so, in order to send it, I figured I need a button. But everything I have tried so far doesn't work.
eg:
on message X
{
if (getValue(ev_button) == 1)
{
X.signalname = (getValue(ev_signalvariable));
}
}
or I tried working on the signal itself:
on signal Y
{
if (getValue(ev_button) == 1)
{
putValue(ev_signalY,this);
}
}
The issue you are having is due to the callback: both on message and on signal callbacks fire when that message or signal is updated on the bus.
In your code, you expect to update a signal if you pressed a button, but only when you detect that the signal was updated in the first place. Do you see the circular dependency?
To fix this, you can create a system variable, associate it with the button (so that 0 = not pressed and 1 = pressed), and then use the on sysvar callback:
on sysvar buttonPressed
{
// prepare message
// send message
}
I assume you already have something like message yourMessage somewhere, that you know the name of the signal from the DBC, and that the DBC is linked to your configuration. So you'll need to:
// prepare message
yourMessage.yourValue1 = <some value>
yourMessage.yourValue2 = <some other value>
// ...
// repeat for all relevant signals
and then
// send message
output(yourMessage) // in CAPL, messages are transmitted with output()
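Putting the pieces together, a minimal sketch (yourMessage, the signal names, and the buttonPressed system variable are the placeholder names from above):
variables
{
  message yourMessage msg; // message type taken from the linked DBC
}

on sysvar buttonPressed
{
  if (@this == 1) // @this holds the new value of the system variable
  {
    // prepare message
    msg.yourValue1 = 0x01;
    msg.yourValue2 = 0x02;
    // repeat for all relevant signals ...
    // send message
    output(msg);
  }
}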
I am reading about event driven programming from the book:
Practical UML Statecharts in C/C++, 2nd Edition:
Event-Driven Programming for Embedded Systems
On page no. xxviii Introduction , the author says:
...the event-driven application must return control after handling
each event, so the execution context cannot be preserved in the
stack-based variables and the program counter as it is in a sequential
program. Instead, the event-driven application becomes a state
machine, or actually a set of collaborating state machines that
preserve the context from one event to the next in the static
variables.
I am unable to understand why the execution context cannot be preserved in the stack-based variables and the program counter once control returns after handling the event.
Let's start with how the traditional sequential programming paradigm works. Suppose that you want to blink an LED on an embedded board. A common solution would be to write a program like this (e.g., see Arduino Blink tutorial):
while (1) { /* RTOS task or a "superloop" */
turn_LED_on(); /* turn the LED on (computation) */
delay(500); /* wait for 500 ms (polling or blocking) */
turn_LED_off(); /* turn the LED off (computation) */
delay(1000); /* wait for 1000 ms (polling or blocking) */
}
The key point here is the delay() function, which waits in-line until the delay elapses. This waiting is called "blocking", because the calling program is blocked until delay() returns.
Please note that the Blinky program calls delay() in two different contexts: first time after turn_LED_on() and the second time after turn_LED_off(). Each time, delay() returns to a different place in the code. This means that while the program is blocked, the information about the place in the code (the context of the call) is automatically preserved.
The trivial Blinky program is very simple, but in principle a blocking function, like delay(), could be called from other functions each with
complex if-else-while code. Still, delay() will be able to return to the exact point of the call, because the C programming language preserves the context of the call (in the call stack and the program counter).
But blocking makes the whole program unresponsive to any other events and therefore people came up with event-driven programming.
An event-driven program is structured around an event-loop. An example event-driven code could look like this:
while (1) { /* event-loop */
Event *e = queue_get(); /* block when event queue is empty */
dispatch(e); /* handle the event, cannot block! */
}
The main point is that the dispatch() "event-handler" function cannot call a blocking function like delay(). Instead, dispatch() can only perform some immediate action and must quickly return back to the event-loop. That way, the event-loop remains responsive at all times.
But, by returning, the dispatch() function removes its own stack frame from the call stack. So the call stack and program counter associated with calling dispatch() are always the same and are useless for "remembering" the execution context.
Instead, to blink the LED, the dispatch() function must rely on some variable (state) that remembers the state (on/off) of the LED. An example how you could write such dispatch() function is as follows:
static enum { OFF, ON } state = OFF; /* start in the OFF state */
/* at startup (e.g., in main()) call timer_arm(1000); to arm a timer that generates the TIMEOUT event in 1000 ms */
void dispatch(Event *e) {
switch (state) {
case OFF:
if (e->sig == TIMEOUT) {
turn_LED_on();
timer_arm(500);
state = ON; /* transition to "ON" state */
}
break;
case ON:
if (e->sig == TIMEOUT) {
turn_LED_off();
timer_arm(1000);
state = OFF; /* transition to "OFF" state */
}
break;
}
}
I hope you can see that dispatch() implements a state machine with states ON and OFF driven by one event TIMEOUT.
I apologize at the outset, as this is more of a design question than a specific problem, so it may not have a simple answer .. Anyway, I am developing quite a complex piece of code to process transactions in a warehouse system. The transactions themselves are user defined, with switches which determine the fields to enter on a screen .. I use a command object to process / validate the form.
Many of the validation steps are straightforward, but some are a little tricky and have cross dependencies between the screen fields .. So for instance I may prompt the user to enter a serial reference, but if that doesn't uniquely identify a carton I need to ask for a part to go with it .. I then hit the database to validate these combinations .. The screen may also have a document reference for the user to enter .. This in turn may be cross referenced with the part / serial on the screen (by hitting the database) to ensure that the part/serial is for the document .. This continues, depending on what's been defined on the transaction, with more cross references and validations.
What I end up with is a lot of 'if this entered and this entered .. validate .. else .. validate' type stuff in the command object, and it looks really ugly .. So my questions are: Should I be putting ALL my validation in the command object (all the database checking etc.), or is there somewhere better to put it? And is there anything I can do better than all my complicated if/else combos, given that this is a command object? I had thought of creating additional classes which the current command object could spawn, making those validateable and spreading the logic out amongst them .. If anyone would like to chip in, I'd appreciate the discussion ..
After Joshua's comments I've started refactoring the code into a service .. It's not complete, but it's taking shape ..
@Transactional
class TransactionValidationService {
static enum state {
RECEIPT,
RECEIPT_SERIAL,
RECEIPT_DOCUMENT,
RECEIPT_PART,
RECEIPT_SERIAL_PART,
RECEIPT_SERIAL_DOCUMENT,
RECEIPT_PART_DOCUMENT,
ISSUE,
TRANSFER
}
def validateTransaction(TransactionDetailCommand transaction) {
// Set initial state ..
def currentState
switch (transaction.transactionType.processType) {
case ProcessType.ISSUE:
currentState = state.ISSUE
break
case ProcessType.RECEIPT:
currentState = state.RECEIPT
break
case ProcessType.TRANSFER:
currentState = state.TRANSFER
}
switch (currentState) {
case state.RECEIPT:
if (transaction.serialReference) {
// validateSerialReference
currentState = state.RECEIPT_SERIAL
} else if (transaction.documentHeader) {
// validateReceiptDocument
currentState = state.RECEIPT_DOCUMENT
}
break
case state.RECEIPT_SERIAL:
if (transaction.part) {
// validatePartSerial
currentState = state.RECEIPT_SERIAL_PART
}
if (transaction.documentHeader) {
// validateDocumentPart
currentState = state.RECEIPT_SERIAL_DOCUMENT
}
break
case state.ISSUE:
break
case state.TRANSFER:
break
}
}
}
First off, using command objects to gather this information is the correct choice.
However, implementing complex validation logic within the constraints as custom validators can be a bit overwhelming. You may want to consider injecting a service into your command object, then delegating the validation to the service from within the custom validator.
For example
@Validateable
@ToString(includeNames=true)
class MyExampleCommand {
def myValidationService = Holders.grailsApplication.mainContext.myValidationService
String someThing
Long someValue
..
static constraints = {
someThing(
nullable: false,
blank: false,
size:1..20,
validator: { val, obj ->
obj.myValidationService.validateSomeThing(obj)
}
)
...
}
...
}
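For reference, a minimal sketch of what the delegated service might look like (validateSomeThing is the name assumed in the example above; the body and the SomeDomain lookup are purely illustrative). In a Grails custom validator, returning true or null passes, while returning a string is treated as an i18n message code:
class MyValidationService {

    def validateSomeThing(MyExampleCommand cmd) {
        // cross-field checks and database lookups live here
        if (cmd.someValue != null && !combinationExists(cmd.someThing, cmd.someValue)) {
            return 'myExampleCommand.someThing.invalidCombination' // error message code
        }
        return true // validation passes
    }

    private boolean combinationExists(String thing, Long value) {
        // hypothetical database check against a domain class
        SomeDomain.countByThingAndValue(thing, value) > 0
    }
}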