Spring State Machine: possible events

Is it possible to get the list of possible events from the current state?
StateMachine<State, Event> stateMachine = stateMachineService.acquireStateMachine(machineId);
stateMachine.sendEvent(event);
stateMachine.getState() // get possible events from State

The only solution I found is:
stateMachine.getTransitions().stream()
        .filter(transition -> transition.getSource().getId().equals(stateMachine.getState().getId()))
        .map(transition -> transition.getTrigger().getEvent())
        .collect(Collectors.toList());
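If the lookup is needed in several places, the same idea can be wrapped in a small helper. A minimal sketch; the method name and the null-trigger guard are my additions, not part of the original solution:
public static <S, E> List<E> possibleEvents(StateMachine<S, E> stateMachine) {
    S currentState = stateMachine.getState().getId();
    return stateMachine.getTransitions().stream()
            // guard against transitions that are not driven by an event
            .filter(transition -> transition.getTrigger() != null)
            .filter(transition -> transition.getSource().getId().equals(currentState))
            .map(transition -> transition.getTrigger().getEvent())
            .collect(Collectors.toList());
}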

Related

Kafka Streams: change in/out topics at runtime

How can I add an incoming topic and change the outgoing topic while the application is running? Depending on which incoming topic is currently being processed, the outgoing topic should change.
in_topic1 -> filter OK -> out_topic1;
in_topic2 -> filter OK -> out_topic2.
final Serde<byte[]> byteArraySerde = Serdes.ByteArray();
final Serde<String> stringSerde = Serdes.String();
final StreamsBuilder builder = new StreamsBuilder();

final KStream<byte[], String> textLines = builder
        .stream(prop.getProperty("kafka.topic.in"), Consumed.with(byteArraySerde, stringSerde));

final KStream<byte[], String> processed = textLines
        .filter(MetaModelProcessor.filter())
        .mapValues(MetaModelProcessor.getMetaModel());

processed.to(prop.getProperty("kafka.topic.out"));

final org.apache.kafka.streams.KafkaStreams streams = new org.apache.kafka.streams.KafkaStreams(
        builder.build(),
        new KafkaStreamsConfig(prop.getProperty("kafka.app.id.config"),
                prop.getProperty("kafka.client.id.config"),
                prop.getProperty("kafka.server")).getStreamsConfiguration());

streams.cleanUp();
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
A Kafka Streams application is essentially a wrapper over a producer and a consumer with higher-order transformation functions. When you create a Streams application, you initialize a topology that interacts with the broker; adding ingress and egress topics dynamically is not a trivial operation.
What would happen to the intermediate results of a v1 topology that consumed a message from topic I1 and was just about to write to topic T1 when a dynamic event switches the output topic to T2? Worse, what if there is a state store being maintained?
This is an unusual requirement. If you find yourself in this place, it probably means the use case and the design need to be revisited thoroughly.
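That said, if all candidate topics are known when the topology is built (as with in_topic1/in_topic2 and out_topic1/out_topic2 above), per-record routing is possible without rebuilding the topology. A rough sketch, not part of the original answer, reusing MetaModelProcessor from the question; the topic names are the ones assumed above:
final StreamsBuilder builder = new StreamsBuilder();

// Subscribe to both input topics with a pattern (java.util.regex.Pattern).
final KStream<byte[], String> textLines = builder
        .stream(Pattern.compile("in_topic[12]"), Consumed.with(Serdes.ByteArray(), Serdes.String()));

textLines
        .filter(MetaModelProcessor.filter())
        .mapValues(MetaModelProcessor.getMetaModel())
        // TopicNameExtractor: choose the output topic per record based on where it came from.
        .to((key, value, recordContext) ->
                        "in_topic1".equals(recordContext.topic()) ? "out_topic1" : "out_topic2",
                Produced.with(Serdes.ByteArray(), Serdes.String()));
This only sidesteps the problem for a fixed set of topics; truly dynamic topics still require rebuilding and restarting the topology.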

Reactive Redis does not continually publish changes to the Flux

I am trying to get live updates from my Redis sorted set, without success.
It seems to fetch all the items and then just end on the last item.
I would like the client to keep getting updates whenever a new order is added to my sorted set.
What am I missing?
This is my code:
@RestController
class LiveOrderController {

    @Autowired
    lateinit var redisOperations: ReactiveRedisOperations<String, LiveOrder>

    @GetMapping(produces = [MediaType.TEXT_EVENT_STREAM_VALUE], value = "/orders")
    fun getLiveOrders(): Flux<LiveOrder> {
        val zops = redisOperations.opsForZSet()
        return zops.rangeByScore("orders", Range.unbounded())
    }
}
There is no such feature in Redis. Reading a sorted set, even through the reactive API, just returns a snapshot of its current contents, so the Flux completes after the last element. What you need is a subscription instead.
If you opt in to keyspace notifications like this (K enables keyspace notifications, z includes zset commands):
config set notify-keyspace-events Kz
And subscribe to them in your service like this:
ReactiveRedisMessageListenerContainer reactiveRedisMessages;
// ...
reactiveRedisMessages.receive(new PatternTopic("__keyspace@0__:orders"))
        .map(m -> {
            System.out.println(m);
            return m;
        })
        <further processing>
You would see messages like this: PatternMessage{channel=__keyspace@0__:orders, pattern=__keyspace@0__:orders, message=zadd}. That notifies you that something has been added, and you can react to it somehow: fetch the full set again, or only a part of it (head/tail). You might even remember the previous set, fetch the new one and send the diff.
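For example, a rough sketch that re-reads the whole set on every notification, assuming the ReactiveRedisOperations<String, LiveOrder> from the question is available:
// Sketch: each zadd notification triggers a fresh read of the "orders" sorted set.
Flux<LiveOrder> liveOrders = reactiveRedisMessages
        .receive(new PatternTopic("__keyspace@0__:orders"))
        .flatMap(msg -> redisOperations.opsForZSet()
                .rangeByScore("orders", Range.unbounded()));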
But what I would really suggest is rearchitecting the flow to use Redis Pub/Sub functionality directly. For example: instead of calling zadd directly, the publisher service calls eval, which issues two commands: zadd orders 1 x and publish orders "1:x" (any custom message you want, maybe JSON).
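A minimal sketch of that publisher side, assuming a ReactiveRedisOperations<String, String> named redisOperations; the script text and the score/member values are illustrative:
// Sketch: atomically add the member to the sorted set and publish a "score:member" notification.
RedisScript<Long> addAndNotify = RedisScript.of(
        "redis.call('zadd', KEYS[1], ARGV[1], ARGV[2]) " +
        "return redis.call('publish', KEYS[1], ARGV[1] .. ':' .. ARGV[2])",
        Long.class);

redisOperations.execute(addAndNotify,
        Collections.singletonList("orders"),   // KEYS[1]
        Arrays.asList("1", "x"))               // ARGV[1] = score, ARGV[2] = member
        .subscribe();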
Then in your code you will subscribe to your custom topic like this:
return reactiveRedisMessages.receive(new PatternTopic("orders"))
.map(LiveOrder::fromNotification);

Spring Statemachine: set the initial state

I have a series of order states like
public enum orderStateEnum {
    STATE_UNUSED("UNUSED"),
    STATE_ORDERED("ORDERED"),
    STATE_ASSIGNED("ASSIGNED"),
    STATE_ASSIGN_EXCEPTION("ASSIGN_EXCEPTION"),
    STATE_PACKED("PACKED"),
    // and so on
}
and I want to use Spring Statemachine (or another state machine implementation) to manage transitions like
STATE_UNUSED -> STATE_ORDERED
STATE_ORDERED -> STATE_ASSIGNED
STATE_ORDERED -> STATE_ASSIGN_EXCEPTION
STATE_ASSIGNED -> STATE_PACKED
However, all the order data is stored in a database, so in my case, if I have an order in the STATE_ASSIGNED state, I fetch the order state from the database. But with Spring Statemachine I have to do something like
StateMachine stateMachine = new StateMachine();
stateMachine.createEvent(Event_take_order);
When I create a new StateMachine instance, its initial state is STATE_UNUSED, but I want the initial state to be the state fetched from the database, which is STATE_ASSIGNED. How can I achieve that? I've read https://docs.spring.io/spring-statemachine/docs/1.0.0.BUILD-SNAPSHOT/reference/htmlsingle/ but couldn't find a solution there.
When you create a new StateMachine you can get a StateMachineAccessor using stateMachine.getStateMachineAccessor().
StateMachineAccessor is:
Functional interface for StateMachine to allow more programmatic access to underlying functionality. Functions prefixed "doWith" will expose StateMachineAccess via StateMachineFunction for better functional access with JDK 7. Functions prefixed "with" are better suitable for lambdas. (From the Javadocs)
StateMachineAccessor has a method called doWithAllRegions where you can provide an implementation of StateMachineFunction (an interface), and doWithAllRegions will execute the given StateMachineFunction with all recursive regions.
So, to achieve what you are trying to do, the code will look like this:
StateMachine<orderStateEnum, Events> stateMachine = smFactory.getStateMachine();
stateMachine.getStateMachineAccessor().doWithAllRegions(access -> access
        .resetStateMachine(new DefaultStateMachineContext<>(STATE_ASSIGNED, null, null, null)));
I have provided the implementation of the interfaces using lambdas.
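In practice the reset is usually wrapped between stopping and starting the machine. A minimal sketch, assuming the persisted state has already been loaded into a variable stateFromDb:
// Sketch: stop, reset every region to the persisted state, then start again.
stateMachine.stop();
stateMachine.getStateMachineAccessor().doWithAllRegions(access ->
        access.resetStateMachine(new DefaultStateMachineContext<>(stateFromDb, null, null, null)));
stateMachine.start();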

How to send a record to a topic when a window closes in Kafka Streams

I have been struggling with this for a couple of days, actually. I am consuming records from 4 topics. I need to aggregate the records over a time window, and when the time is up I want to send either an approved or a not-approved message to a sink topic. Is this possible with Kafka Streams?
It seems to sink every record to the new topic, even though the window is still open, and that's really not what I want.
Here is the simple code:
builder.stream(getTopicList(), Consumed.with(Serdes.ByteArray(), Serdes.ByteArray()))
        .flatMap(new ExceptionSafeKeyValueMapper<String, FooTriggerMessage>("", Serdes.String(), fooTriggerSerde))
        .filter((key, value) -> value.getTriggerEventId() != null)
        .groupBy((key, value) -> value.getTriggerEventId().toString(),
                Serialized.with(Serdes.String(), fooTriggerSerde))
        .windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(30))
                .advanceBy(TimeUnit.SECONDS.toMillis(30)))
        .aggregate(() -> new BarApprovalMessage(), /* initializer */
                (key, value, aggValue) -> getApproval(key, value, aggValue), /* adder */
                Materialized
                        .<String, BarApprovalMessage, WindowStore<Bytes, byte[]>>as(storeName) /* state store name */
                        .withValueSerde(barApprovalSerde))
        .toStream()
        .to(appProperties.getBarApprovalEngineOutgoing(),
                Produced.with(windowedSerde, barApprovalSerde));
As of now, every record is sent to the outgoing topic; I only want one message to be sent when the window closes, so to speak.
Is this possible?
Answering my own question, in case anyone else needs it. In the transform stage, I used the context to create a scheduler. The scheduler takes three parameters: the interval at which to punctuate, which time to use (wall-clock or stream time), and a supplier (the method to be called when the time is met). I used wall-clock time and started a new scheduler for each unique window key. I add each message to a KeyValue store and return null. Then, in the method that is called every 30 seconds, I check that the window is closed, iterate over the messages in the key store, aggregate them, and use context.forward and context.commit. Voilà: 4 messages received in a 30-second window, one message produced.
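A simplified sketch of that approach (not the exact original code): it assumes Kafka Streams 2.1+ for the Duration-based schedule() overload, a key-value store registered under the hypothetical name "approval-buffer", and that getApproval(...) from the question is accessible as the adder. It also uses a single punctuator rather than one per window key:
public class WindowedApprovalTransformer
        implements Transformer<String, FooTriggerMessage, KeyValue<String, BarApprovalMessage>> {

    private ProcessorContext context;
    private KeyValueStore<String, BarApprovalMessage> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.store = (KeyValueStore<String, BarApprovalMessage>) context.getStateStore("approval-buffer");
        // Fire every 30 seconds of wall-clock time and emit one result per key.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            try (KeyValueIterator<String, BarApprovalMessage> it = store.all()) {
                while (it.hasNext()) {
                    KeyValue<String, BarApprovalMessage> entry = it.next();
                    context.forward(entry.key, entry.value);   // single message per closed window
                    store.delete(entry.key);
                }
            }
            context.commit();
        });
    }

    @Override
    public KeyValue<String, BarApprovalMessage> transform(String key, FooTriggerMessage value) {
        // getApproval(...) is the adder from the question, assumed accessible here.
        store.put(key, getApproval(key, value, store.get(key)));
        return null;   // nothing is emitted per record
    }

    @Override
    public void close() { }
}
The store has to be added to the StreamsBuilder and the transformer wired in with .transform(WindowedApprovalTransformer::new, "approval-buffer") so that the buffered records survive restarts.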
You can use the Suppress functionality.
From Kafka official guide:
https://kafka.apache.org/21/documentation/streams/developer-guide/dsl-api.html#window-final-results
I faced the same issue and solved it by adding grace(0) after the fixed window and using the Suppressed API:
public void process(KStream<SensorKeyDTO, SensorDataDTO> stream) {
    buildAggregateMetricsBySensor(stream)
            .to(outputTopic, Produced.with(String(), new SensorAggregateMetricsSerde()));
}

private KStream<String, SensorAggregateMetricsDTO> buildAggregateMetricsBySensor(KStream<SensorKeyDTO, SensorDataDTO> stream) {
    return stream
            .map((key, val) -> new KeyValue<>(val.getId(), val))
            .groupByKey(Grouped.with(String(), new SensorDataSerde()))
            .windowedBy(TimeWindows.of(Duration.ofMinutes(WINDOW_SIZE_IN_MINUTES)).grace(Duration.ofMillis(0)))
            .aggregate(SensorAggregateMetricsDTO::new,
                    (String k, SensorDataDTO v, SensorAggregateMetricsDTO va) -> aggregateData(v, va),
                    buildWindowPersistentStore())
            .suppress(Suppressed.untilWindowCloses(unbounded()))
            .toStream()
            .map((key, value) -> KeyValue.pair(key.key(), value));
}

private Materialized<String, SensorAggregateMetricsDTO, WindowStore<Bytes, byte[]>> buildWindowPersistentStore() {
    return Materialized
            .<String, SensorAggregateMetricsDTO, WindowStore<Bytes, byte[]>>as(WINDOW_STORE_NAME)
            .withKeySerde(String())
            .withValueSerde(new SensorAggregateMetricsSerde());
}

OMNeT++: How can I get the list of all scheduled events of a module?

I am scheduling a list of events against a node in OMNeT++ using:
scheduleAt(simTime().dbl() + slotTime, msg)
and there can be multiple such scheduled events in the future event list for a single module.
Now, at a given time instant, I want to cancel all future scheduled events of a node, and that's why I need the list of all future events.
To the best of my knowledge, cancelEvent(msg) only cancels one event. How can I find the list and remove all events? Please help.
To access the list of all future events one can use getMessageQueue(). To remove only your own events (i.e. self-messages), every event in that list has to be checked using isSelfMessage(). Sample code that removes all self-messages from the future event set:
// Access the simulation's future event set
cMessageHeap& heap = cSimulation::getActiveSimulation()->getMessageQueue();
cMessageHeap::Iterator it(heap);
do {
    cMessage *event = it();                // current event, or nullptr if the set is empty
    if (event && event->isSelfMessage()) {
        cancelAndDelete(event);            // remove the self-message and free it
        it.init(heap);                     // the heap has changed, so restart the iterator
    } else {
        it++;
    }
} while (!it.end());
