Spring State Machine - How to navigate to an error final state if any action has an error - spring-statemachine

Let us say we have 3 states S1, S2, S3 and an error state E1. The S* states have actions A1, A2, A3 respectively. If any of A1, A2, A3 throws an exception, the state machine should go to E1. Is there a way I can define a common transition that says: if any action throws an exception, go to E1? Or do I have to explicitly state the transition for each state?
states.withStates()
    .initial(START)
    .end(END)
    .end(ERROR)
    .state(S1, A1())
    .state(S2, A2())
    .state(S3, A3());

// Is this the only way of defining the error transitions? Can I DRY it up?
transitions
    .withExternal()
        .source(S1)
        .event(ErrorEvent)
        .target(E1)
        .and()
    .withExternal()
        .source(S2)
        .event(ErrorEvent)
        .target(E1)
        .and()
    .withExternal()
        .source(S3)
        .event(ErrorEvent)
        .target(E1);
I was hoping for a way to DRY this up somehow. If I have 10 states, it becomes quite repetitive to do it this way.

There's currently no common way using the State Machine builder to do that.
By definition each state transition should be explicitly defined in the configuration.
It's another question to discuss if this is a correct approach in the SM world:
If any of A1, A2, A3 results in an exception being thrown, the state
machine should go to E1.
It's debatable if the SM should go to an error state or should stay in the same state or revert to a previous state and so on.
My understanding is that having a common error state to transition to, when an error happens in any other state, is not a good approach.
It really depends on what you need to do with the error.
For example, if you want to signal the calling code or propagate the error outside of the state machine, you can use the StateContext or the machine's ExtendedState for that.
You can insert a custom flag (e.g. "hasError") and the Exception itself, and expose that to your caller code, but this is not very elegant. Something has to check for that flag on each invocation and notify the caller code (or the caller code itself has to check it, which is ugly).
Another approach would be to clear the stateContext and stay in the same state, waiting for some retry logic to kick in.
Or in your errorAction you can send the ErrorEvent to your SM and pass whatever info is needed in the SM Context, which I believe is the approach you've chosen.
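For illustration, here is a minimal sketch of such an error action (the States/Events enum names and the "hasError" variable are placeholders, and how you attach it alongside your state actions depends on your configuration style and version):
// Hypothetical error action: record the failure in the extended state and
// drive the machine towards the error state by sending the ErrorEvent.
public Action<States, Events> errorAction() {
    return context -> {
        context.getExtendedState().getVariables().put("hasError", true);
        context.getExtendedState().getVariables().put("error", context.getException());
        context.getStateMachine().sendEvent(Events.ErrorEvent);
    };
}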
Either way - all transitions should be present in the StateMachineModel.
This model can be created using the builder approach, or you can create it programmatically (where it is easier, for example, to iterate across all existing states and define just one extra transition to the error state for each).
@Override
public StateMachineModel<String, String> build() {
    ConfigurationData<String, String> configurationData = new ConfigurationData<>();

    Collection<StateData<String, String>> stateData = new ArrayList<>();
    stateData.add(new StateData<String, String>("S1", true));
    stateData.add(new StateData<String, String>("S2"));
    stateData.add(new StateData<String, String>("S3"));
    stateData.add(new StateData<String, String>("ERROR_STATE"));
    StatesData<String, String> statesData = new StatesData<>(stateData);

    Collection<TransitionData<String, String>> transitionData = new ArrayList<>();
    transitionData.add(new TransitionData<String, String>("S1", "S2", "E1"));
    // iterate over all states and add one error transition per state
    stateData.forEach(state -> {
        if (!"ERROR_STATE".equals(state.getState())) {
            transitionData.add(new TransitionData<String, String>(state.getState(), "ERROR_STATE", "ERROR_EVENT"));
        }
    });
    TransitionsData<String, String> transitionsData = new TransitionsData<>(transitionData);

    return new DefaultStateMachineModel<String, String>(configurationData, statesData, transitionsData);
}
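If you go down the programmatic route, the model is typically exposed through a StateMachineModelFactory and wired in via the model configurer, roughly like this (a sketch; CustomStateMachineModelFactory stands for whatever class holds the build() method above):
@Configuration
@EnableStateMachine
public class Config extends StateMachineConfigurerAdapter<String, String> {

    @Override
    public void configure(StateMachineModelConfigurer<String, String> model) throws Exception {
        model
            .withModel()
            .factory(modelFactory());
    }

    @Bean
    public StateMachineModelFactory<String, String> modelFactory() {
        // the class containing the build() method shown above
        return new CustomStateMachineModelFactory();
    }
}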

Related

Linearization in Reactor Netty (Spring Boot Webflux)

How can I guarantee linearizability of requests in Reactor Netty?
Theory:
Given:
Request A wants to write x=2, y=0
Request B wants to read x, y and write x=x+2, y=y+1
Request C wants to read x and write y=x
All Requests are processed asynchronously and return to the client immediately with status ACCEPTED.
Example:
Send requests A, B, C in order.
Example Log Output: (request, thread name, x, y)
Request A, nioEventLoopGroup-2-0, x=2, y=0
Request C, nioEventLoopGroup-2-2, x=2, y=2
Request B, nioEventLoopGroup-2-1, x=4, y=3
Business logic requires all reads after A to see x=2 and y=0.
And request B to see x=2, y=0 and set y=1.
And request C to see x=4 and set y=4.
In short: the business logic makes every subsequent write operation dependent on the previous write operation having completed. Otherwise the operations are not reversible.
Example Code
Document:
@Document
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Event {

    @Id
    private String id;
    private int data;

    public Event withNewId() {
        setId(UUID.randomUUID().toString());
        return this;
    }
}
Repo:
public interface EventRepository extends ReactiveMongoRepository<Event, String> {}
Controller:
@RestController
@RequestMapping(value = "/api/event")
@RequiredArgsConstructor
public class EventHandler {

    private final EventRepository repo;

    @PostMapping
    public Mono<String> create(Event event) {
        return Mono.just(event.withNewId().getId())
            .doOnNext(id ->
                // do query based on some logic depending on event data
                Mono.just(someQuery)
                    .flatMap(query ->
                        repo.find(query)
                            .map(e -> event.setData(event.getData() + e.getData())))
                    .switchIfEmpty(Mono.just(event))
                    .flatMap(e -> repo.save(e))
                    .subscribeOn(Schedulers.single())
                    .subscribe());
    }
}
It does not work, but with subscribeOn I am trying to guarantee linearizability, meaning that concurrent requests A and B will always write their payload to the DB in the order in which they are received by the server. Therefore, if another concurrent request C is a compound of first read then write, it will read changes from the DB that reflect those of request B, not A, and write its own changes based on B.
Is there a way in Reactor Netty to schedule executors with an unbound FIFO queue, so that I can process the requests asynchronously but in order?
I don't think this is specific to Netty or Reactor in particular; it is a broader topic - how to handle out-of-order message delivery and more-than-once message delivery. A few questions:
Does the client always send the same number of requests in the same order? There's always a chance that, due to networking issues, the requests arrive out of order, or that one or more are lost.
Does the client make retries? What happens if the same request reaches the server twice?
If the order matters, why doesn't the client wait for the result of the (n-1)th request before issuing the nth request? In other words, why are there many concurrent requests?
I'd try to redesign the operation in such a way that there's a single request executing the operations on the backend in the required order, using concurrency there if necessary to speed up the process.
If that's not possible - for example, you don't control the client, or more generally the order in which the events (requests) arrive - you have to implement ordering in application-level logic, using per-message semantics. You can, for example, store or buffer the messages, waiting for all of them to arrive, and only then trigger the business logic using the data from the messages in the correct order. This requires some kind of key (identity) that attributes messages to the same entity, and a sorting key that tells you how to order the messages correctly.
EDIT:
After getting the answers, you can definitely implement it "the Reactor way".
Sinks.Many<Event> sink = Sinks.many()   // you create a 'sink' where the events will go
    .multicast()                        // broadcasts all messages to all subscribers of the stream
    .directBestEffort();                // additional semantics - publishing fails if there are no subscribers - doesn't really matter here

Flux<Event> eventFlux = sink.asFlux();  // the 'view' of the sink as a Flux you can subscribe to
public void run() {
    subscribeAndProcess();

    sink.tryEmitNext(new Event("A", "A", "A"));
    sink.tryEmitNext(new Event("A", "C", "C"));
    sink.tryEmitNext(new Event("A", "B", "B"));
    sink.tryEmitNext(new Event("B", "A", "A"));
    sink.tryEmitNext(new Event("B", "C", "C"));
    sink.tryEmitNext(new Event("B", "B", "B"));
}

void subscribeAndProcess() {
    eventFlux.groupBy(Event::key)
        .flatMap(groupedEvents -> groupedEvents
            .distinct(Event::type)  // distinct to avoid duplicates
            .buffer(3)              // there are three event types, so we buffer and wait for all to arrive
            .flatMap(events ->      // once all the events are there we can do the processing the way we need
                Mono.just(events.stream()
                    .sorted(Comparator.comparing(Event::type))
                    .map(e -> e.key + e.value)
                    .reduce(String::concat)
                    .orElse(""))))
        .subscribe(System.out::println);
}
// prints values concatenated in order per key:
// - AAABAC
// - BABBBC
See Gist: https://gist.github.com/tarczynskitomek/d9442ea679e3eed64e5a8470217ad96a
There are a few caveats:
If not all of the expected events for a given key arrive, you waste memory buffering - unless you set a timeout.
How will you ensure that all the events for a given key go to the same application instance?
How will you recover from failures encountered mid-processing?
Having all this in mind, I would go with persistent storage - say, saving the incoming events in a database and doing the processing in the background - and for that you don't need Reactor. Most of the time a simple Servlet-based Spring app will be far easier to maintain and develop, especially if you have no previous experience with functional reactive programming.
Looking at the provided code, I would not try to handle this at the Reactor Netty level.
First, several comments regarding the controller implementation, because it has multiple issues that violate reactive principles. I would recommend spending some time learning the reactive API, but here are some hints:
In reactive programming nothing happens until you subscribe. At the same time, calling subscribe explicitly is an anti-pattern and should be avoided unless you are creating a framework similar to WebFlux.
The parallel scheduler should be used to run non-blocking logic unless you have some blocking code.
doOn... are so-called side-effect operators and should not be used for constructing reactive flows.
@PostMapping
public Mono<String> create(Event event) {
    // do query based on some logic depending on event data
    return repo.find(query)
        .map(e -> event.setData(event.getData() + e.getData()))
        .switchIfEmpty(Mono.just(event))
        .flatMap(e -> repo.save(e));
}
Now, processing requests in the predefined sequence could be tricky because of network failures, possible retries, etc. What if you never get Request B or Request C? Should you still persist Request A?
As @ttarczynski mentioned in his comment, the best option is to redesign the API and send a single request.
In case that's not an option, you would need to introduce some state to "postpone" request processing and then, depending on the consistency semantics, process them as a "batch" when the last request is received, or just defer Request C until you get Request A & B.

Spring Webflux: efficiently using Flux and/or Mono stream multiple times (possible?)

I have the method below, where I am calling several ReactiveMongoRepositories in order to receive and process certain documents. Since I am kind of new to Webflux, I am learning as I go.
The code below doesn't feel very efficient to me, as I am opening multiple streams at the same time. This non-blocking way of writing code makes it somewhat complicated to get a value from a stream and re-use that value in the cascaded flatMaps further down the line.
In the example below I have to call the userRepository twice, since I want the user at the beginning and then again later. Is there a possibility to do this more efficiently with Webflux?
public Mono<Guideline> addGuideline(Guideline guideline, String keycloakUserId) {
    Mono<Guideline> guidelineMono = userRepository.findByKeycloakUserId(keycloakUserId)
            .flatMap(user -> {
                return teamRepository.findUserInTeams(user.get_id());
            }).zipWith(instructionRepository.findById(guideline.getInstructionId()))
            .zipWith(userRepository.findByKeycloakUserId(keycloakUserId))
            .flatMap(objects -> {
                User user = objects.getT2();
                Instruction instruction = objects.getT1().getT2();
                Team team = objects.getT1().getT1();
                if (instruction.getTeamId().equals(team.get_id())) {
                    guideline.setAddedByUser(user.get_id());
                    guideline.setTeamId(team.get_id());
                    guideline.setDateAdded(new Date());
                    guideline.setGuidelineStatus(GuidelineStatus.ACTIVE);
                    guideline.setGuidelineSteps(Arrays.asList());
                    return guidelineRepository.save(guideline);
                } else {
                    return Mono.error(new InstructionDoesntBelongOrExistException("Unable to add, since this Instruction does not belong to you or doesn't exist anymore!"));
                }
            });
    return guidelineMono;
}
I'll post my earlier comment as an answer. If anyone feels like writing out the full code for it, go ahead.
I don't have access to an IDE at the moment so I can't write an example, but you could start by fetching the Instruction from the database.
Keep that Mono<Instruction>, then fetch your User and flatMap it to fetch the Team from the database. Then you map the Team together with the User into a Mono<Tuple2<User, Team>>.
After that you take your two Monos and use zipWith with a combinator function to build a Mono<Tuple3<User, Team, Instruction>> that you can flatMap over.
So basically: fetch one item, then fetch two items, then combine into three items. You can create tuples using the Tuples.of(...) function, as in the sketch below.
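Something along these lines (a rough, untested sketch that reuses the repositories and types from the question):
public Mono<Guideline> addGuideline(Guideline guideline, String keycloakUserId) {
    // 1. fetch one item
    Mono<Instruction> instructionMono =
            instructionRepository.findById(guideline.getInstructionId());

    // 2. fetch two items: the user, then the team, combined into a Tuple2
    Mono<Tuple2<User, Team>> userAndTeamMono =
            userRepository.findByKeycloakUserId(keycloakUserId)
                    .flatMap(user -> teamRepository.findUserInTeams(user.get_id())
                            .map(team -> Tuples.of(user, team)));

    // 3. combine into three items and flatMap over the result
    return userAndTeamMono
            .zipWith(instructionMono, (userAndTeam, instruction) ->
                    Tuples.of(userAndTeam.getT1(), userAndTeam.getT2(), instruction))
            .flatMap(tuple -> {
                User user = tuple.getT1();
                Team team = tuple.getT2();
                Instruction instruction = tuple.getT3();
                if (!instruction.getTeamId().equals(team.get_id())) {
                    return Mono.error(new InstructionDoesntBelongOrExistException(
                            "Unable to add, since this Instruction does not belong to you or doesn't exist anymore!"));
                }
                guideline.setAddedByUser(user.get_id());
                guideline.setTeamId(team.get_id());
                guideline.setDateAdded(new Date());
                guideline.setGuidelineStatus(GuidelineStatus.ACTIVE);
                guideline.setGuidelineSteps(Arrays.asList());
                return guidelineRepository.save(guideline);
            });
}
This way userRepository is only queried once.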

RunnableGraph to wait for multiple response from source

I am using Akka in a Play controller and performing ask() to an actor named publish; internally the publish actor performs ask to multiple actors and passes along the sender reference. The controller needs to wait for the responses from multiple actors and create a list of the responses.
Please find the code below, but this code only waits for one response and then terminates. Please advise.
// Performs ask to publish actor
Source<Object, NotUsed> inAsk = Source.fromFuture(
    ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000));

final Sink<String, CompletionStage<String>> sink = Sink.head();

final Flow<Object, String, NotUsed> f3 = Flow.of(Object.class).map(elem -> {
    log.info("Data in Graph is " + elem.toString());
    return elem.toString();
});

RunnableGraph<CompletionStage<String>> result = RunnableGraph.fromGraph(
    GraphDSL.create(sink, (builder, out) -> {
        final Outlet<Object> source = builder.add(inAsk).out();
        builder
            .from(source)
            .via(builder.add(f3))
            .to(out); // to() expects a SinkShape
        return ClosedShape.getInstance();
    }));

ActorMaterializer mat = ActorMaterializer.create(aSystem);
CompletionStage<String> fin = result.run(mat);
fin.toCompletableFuture().thenApply(a -> {
    log.info("Data is " + a);
    return true;
});

log.info("COMPLETED CONTROLLER ");
If you have several responses, ask won't cut it; it is only for a single request-response where the response ends up in a Future/CompletionStage.
There are a few different strategies to wait for all answers:
One is to create an intermediate actor whose only job is to collect all answers and then, when all partial responses have arrived, respond to the original requestor; that way you could use ask to get a single aggregate response back.
Another option would be to use Source.actorRef to get an ActorRef that you could use as the sender together with tell (and skip using ask). Inside the stream you would then take elements until some criterion is met (enough time has passed or enough elements have been seen). You may have to add an operator to mimic the ask response timeout to make sure the stream fails if the actor never responds. A rough sketch of this option follows.
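(In the sketch below, publishActor, expectedResponses and the injected materializer are assumptions, not code from the question.)
// Materialize an ActorRef that feeds the stream, hand it to publishActor as the
// sender via tell, and collect replies until a count or a timeout is reached.
Source<Object, ActorRef> replies = Source.actorRef(100, OverflowStrategy.dropNew());

Pair<ActorRef, CompletionStage<List<String>>> pair =
    replies
        .map(Object::toString)
        .take(expectedResponses)            // stop once all partial responses have arrived
        .takeWithin(Duration.ofSeconds(5))  // or give up after a timeout
        .toMat(Sink.seq(), Keep.both())
        .run(materializer);

publishActor.tell(service.getOfferVerifyRequest(request).getPayloadData(), pair.first());

pair.second().thenAccept(allReplies -> log.info("Got " + allReplies.size() + " responses"));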
There are some other issues with the code shared. One is creating a materializer on each request; these have a lifecycle and will fill up your heap over time, so you should instead have a materializer injected from Play.
With the given logic there is no need whatsoever to use the GraphDSL, that is only needed for complex streams with multiple inputs and outputs or cycles. You should be able to compose operators using the Flow API alone (see for example https://doc.akka.io/docs/akka/current/stream/stream-flows-and-basics.html#defining-and-running-streams )
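For this particular pipeline that boils down to something like the following sketch (again assuming an injected materializer):
// The same logic composed with plain operators, no GraphDSL needed.
CompletionStage<String> fin =
    Source.fromFuture(ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000))
        .map(elem -> {
            log.info("Data in Graph is " + elem.toString());
            return elem.toString();
        })
        .runWith(Sink.head(), materializer);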

Java8 streams map - check if all map operations succeeded?

I am trying to map one list to another using streams.
Some elements of the original list fail to map. That is, the mapping function may not be able to find an appropriate new value.
I want to know if any of the mappings has failed. Ideally I would also like to stop the processing once a failure happened.
What I am currently doing is:
The mapping function returns null if there's no mapped value
I filter() to remove nulls from the stream
I collect(), and then
I compare the size of the result to the size of the original list.
For example:
List<String> func(List<String> old, Map<String, String> oldToNew)
{
    List<String> holger = old.stream()
            .map(oldToNew::get)
            .filter(Objects::nonNull)
            .collect(Collectors.toList());
    if (holger.size() < old.size()) {
        // ... appropriate error handling code ...
    }
    else {
        return holger;
    }
}
This is not very elegant. Also, everything is processed even when the whole thing should fail.
Suggestions for a better way of doing it?
Or maybe I should ditch streams altogether and use good old loops?
There is no best solution because that heavily depends on the use case. E.g. if lookup failures are expected to be unlikely or the error handling implies throwing an exception anyway, just throwing an exception at the first failed lookup within the mapping function might indeed be a good choice. Then, no follow-up code has to care about error conditions.
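For example, a minimal sketch of that fail-fast variant:
List<String> func(List<String> old, Map<String, String> oldToNew) {
    return old.stream()
            .map(key -> {
                String value = oldToNew.get(key);
                if (value == null) // stop at the first failed lookup
                    throw new IllegalStateException("lookup failed for " + key);
                return value;
            })
            .collect(Collectors.toList());
}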
Another way of handling it might be:
List<String> func(List<String> old, Map<String, String> oldToNew) {
    Map<Boolean, List<String>> map = old.stream()
            .map(oldToNew::get)
            .collect(Collectors.partitioningBy(Objects::nonNull));
    List<String> failed = map.get(false);
    if (!failed.isEmpty())
        throw new IllegalStateException(failed.size() + " lookups failed");
    return map.get(true);
}
This can still be considered optimized for the successful case, as it collects a mostly meaningless list containing null values for the failures. But it has the advantage of being able to tell the number of failures (unlike a throwing map function).
If a detailed error analysis has a high priority, you may use a solution like this:
List<String> func(List<String> old, Map<String, String> oldToNew) {
    Map<Boolean, List<String>> map = old.stream()
            .map(s -> new AbstractMap.SimpleImmutableEntry<>(s, oldToNew.get(s)))
            .collect(Collectors.partitioningBy(e -> e.getValue() != null,
                    Collectors.mapping(e -> Optional.ofNullable(e.getValue()).orElse(e.getKey()),
                            Collectors.toList())));
    List<String> failed = map.get(false);
    if (!failed.isEmpty())
        throw new IllegalStateException("The following key(s) failed: " + failed);
    return map.get(true);
}
It collects two meaningful lists, containing the failed keys for failed lookups and a list of successfully mapped values. Note that both lists could be returned.
You could also replace the filter with a map through Objects::requireNonNull and catch the resulting NullPointerException outside the stream.
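For example:
List<String> func(List<String> old, Map<String, String> oldToNew) {
    try {
        return old.stream()
                .map(oldToNew::get)
                .map(Objects::requireNonNull) // throws on the first missing mapping
                .collect(Collectors.toList());
    } catch (NullPointerException e) {
        // ... appropriate error handling code ...
        throw new IllegalStateException("at least one lookup failed", e);
    }
}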

How can I prove fields-grouping functionality (tuples with the same field go to the same task)?

I'm a fresher of Storm, I'm getting started with Storm using the project storm-starter. In this project there is a Topology called WordCountTopology, the key code for building topology is:
builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));
and in the implementation of WordCount bolt, the key method execute is:
@Override
public void execute(Tuple tuple, BasicOutputCollector collector) {
    String word = tuple.getString(0);
    Integer count = counts.get(word);
    if (count == null)
        count = 0;
    count++;
    counts.put(word, count);
    collector.emit(new Values(word, count));
}
My Question is:
The functionality of fields grouping is that tuples with the same field word will go to the same task for post-processing. Here "task" means thread; how can I prove this functionality? In addition, in my opinion, the logic in the execute method is a little awkward. In a single task, the parameter tuple is always the same, but the execute method does not reflect this; in other words, the logic does not use this convenience.
Am I clear? My point is that the code in execute does not take the feature of fields grouping into account; the same code could also be applied in a shuffle-grouping situation.
I would like to cite a few points; they might help clear your doubts.
Here "task" means thread
In Storm's terminology, tasks are NOT threads; they are responsible for processing the actual logic. Each spout or bolt that you implement in your code executes as multiple tasks across the cluster. So you can think of them as running instances of the components, i.e. spouts or bolts.
There is another entity called an executor, which is the thread responsible for running these tasks. It can run one or multiple tasks of the same component. An executor having multiple tasks means the same component is executed multiple times by that executor.
Now coming back to your question
the code in execute does not take the feature of fields grouping into account; the same code could also be applied in a shuffle-grouping situation
Very briefly, a fields grouping lets you partition a stream by a subset of its fields. In the word-count case, if we grouped the stream using fieldsGrouping on a field named first_name, then all tuples whose first_name has a given value, say Foo, are expected to go to the same task, while the same field with a different value (Bar) may go to another task.
So here the execute method can rely on all tuples for a given word arriving at the same task, and can therefore simply update its local counter; it does not need to do anything special for that. The whole logic is written with the assumption that the bolt will be fed the proper data, which is why using the proper grouping is so important. If you use shuffleGrouping instead, the same code will run but will produce incorrect data.
Well Pinky (or anyone else who finds this useful), to prove it, you just have to keep track of the bolt or spout task ID:
@Override
public void prepare(Map map, TopologyContext tc, OutputCollector oc) {
    this.boltId = tc.getThisTaskId();
}
Now in the execute() of the same fieldsGrouped bolt that receives the tuples, you just print the id and the tuple:
@Override
public void execute(Tuple tuple) {
    String myWord = (String) tuple.getValue(0);
    System.out.println("word: " + myWord + " boltID:" + boltId);
}
