I am exploring IWorkflowLaunchpad and understand that it provides a service to query workflows and invoke them...
I noticed two variants (aside from searching):
Dispatchers --> (example: DispatchPendingWorkflowAsync)
Executors --> (example: ExecutePendingWorkflowsAsync)
They have similar signatures... can someone explain the difference in basic terms?
When it comes to implementing event-sourcing-based microservices, one of the main concerns we've come across is aggregating data for responses. For example, we may have two entities like school and student. One microservice may be responsible for handling school-related business logic while another may handle students.
Now if someone makes a query through a REST endpoint asking for a particular student, and they expect both school and student details, the only ways I know of are the following:
Use something like service chaining. An example would be an API gateway aggregating a response after making a couple of requests to a couple of microservices.
Having everything replicated throughout all services. Essentially, data would be duplicated.
Having services call each other for those extra bits of information. This works but is hard to scale and goes against the basic idea of using event sourcing.
My question is: what other ways are there to do this?
A better approach can be to create a separate reporting/search service that aggregates the data from both services, implemented using, for example, Elasticsearch or Solr. This allows users to run searches and queries across multiple services and aggregates.
Sure, it will be eventually consistent, but I doubt that is a problem. This gives a better separation of concerns, and you get a nice search experience for your users at the same time.
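For illustration, here is a minimal sketch of such a projection in Java. The event handler names are hypothetical, and an in-memory map stands in for the Elasticsearch/Solr index:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical read-model projection for the reporting/search service.
// It consumes events from the student and school services and maintains
// a denormalized "student with school" document, so one lookup answers
// the REST query. In production the map would be a search index.
public class StudentSearchProjection {

    public record StudentDocument(String studentId, String name,
                                  String schoolId, String schoolName) {}

    private final Map<String, StudentDocument> index = new ConcurrentHashMap<>();
    private final Map<String, String> schoolNames = new ConcurrentHashMap<>();

    // Event published by the student service
    public void onStudentRegistered(String studentId, String name, String schoolId) {
        index.put(studentId, new StudentDocument(
                studentId, name, schoolId, schoolNames.get(schoolId)));
    }

    // Event published by the school service
    public void onSchoolRenamed(String schoolId, String newName) {
        schoolNames.put(schoolId, newName);
        // Re-denormalize affected documents; this is where the
        // eventual consistency lives.
        index.replaceAll((id, doc) -> schoolId.equals(doc.schoolId())
                ? new StudentDocument(doc.studentId(), doc.name(), schoolId, newName)
                : doc);
    }

    // Query side: student and school details in a single read
    public StudentDocument findStudent(String studentId) {
        return index.get(studentId);
    }
}

The important part is that the query side reads one denormalized document instead of chaining calls across services.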
If I need to mutate multiple variables as part of a transaction, what is the best approach?
As far as I understand, GraphQL will issue the individual mutation commands sequentially behind the scenes. What do I need to do to make it transactional?
Thanks!
Is this a single mutation that performs multiple DB operations? If so, your mutation resolver is expected to perform all the operations, so it can manage the transaction itself: open, perform multiple operations, close.
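For instance, here's a rough sketch of that first case using graphql-java and plain JDBC; the schema, arguments, and SQL are made up for illustration:

import graphql.schema.DataFetcher;
import graphql.schema.DataFetchingEnvironment;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

// One mutation, several DB operations, one transaction managed
// entirely inside the resolver.
public class TransferFundsFetcher implements DataFetcher<Boolean> {

    private final DataSource dataSource;

    public TransferFundsFetcher(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Boolean get(DataFetchingEnvironment env) throws Exception {
        String from = env.getArgument("from");
        String to = env.getArgument("to");
        int amount = env.getArgument("amount");

        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);  // open
            try {
                update(con, "UPDATE account SET balance = balance - ? WHERE id = ?", amount, from);
                update(con, "UPDATE account SET balance = balance + ? WHERE id = ?", amount, to);
                con.commit();          // close on success
                return true;
            } catch (Exception e) {
                con.rollback();        // close on failure
                throw e;
            }
        }
    }

    private void update(Connection con, String sql, int amount, String account) throws Exception {
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, amount);
            ps.setString(2, account);
            ps.executeUpdate();
        }
    }
}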
If you're performing multiple mutations in a single request, they're always executed serially. In this case, you can simply open the transaction when you start GraphQL processing (e.g. in the controller or servlet or an instrumentation) and close it once you receive the result (or an exception). So the transaction management is around GraphQL execution, but outside of it.
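A sketch of that wrap-around approach, again assuming graphql-java and JDBC; the ConnectionHolder is a hypothetical helper so the resolvers can share the request's connection:

import graphql.ExecutionResult;
import graphql.GraphQL;
import java.sql.Connection;
import javax.sql.DataSource;

public class TransactionalGraphQLExecutor {

    private final GraphQL graphQL;
    private final DataSource dataSource;

    public TransactionalGraphQLExecutor(GraphQL graphQL, DataSource dataSource) {
        this.graphQL = graphQL;
        this.dataSource = dataSource;
    }

    // The transaction opens before GraphQL execution starts and closes
    // after the result (or an exception) comes back: around the
    // execution, but outside of it.
    public ExecutionResult execute(String query) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            ConnectionHolder.set(con); // resolvers read the shared connection from here
            try {
                ExecutionResult result = graphQL.execute(query);
                if (result.getErrors().isEmpty()) {
                    con.commit();
                } else {
                    con.rollback();
                }
                return result;
            } catch (Exception e) {
                con.rollback();
                throw e;
            } finally {
                ConnectionHolder.clear();
            }
        }
    }

    // Hypothetical ThreadLocal holder so all resolvers in the request
    // use the same connection.
    static final class ConnectionHolder {
        private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();
        static void set(Connection con) { CURRENT.set(con); }
        static Connection get() { return CURRENT.get(); }
        static void clear() { CURRENT.remove(); }
    }
}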
If you want only certain operations from the request to be executed transactionally but not the others, you're already in the territory best left to frameworks. Use something like Spring's transaction management and let that deal with the correct lifecycle.
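For example, something along these lines, with illustrative names:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderMutationService {

    // Spring opens, commits, or rolls back the transaction around this
    // method, so a mutation resolver can delegate the operations that
    // must be atomic to it and leave the rest of the request outside.
    @Transactional
    public void placeOrder(String customerId, String productId) {
        // ... multiple repository calls that must succeed or fail together
    }
}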
If, on the other hand, you're trying to implement a transaction spanning multiple distinct requests, that is really not something you should be doing, for many reasons, extremely convoluted transaction management being only one of them.
I'd like to use Spring SM in my next project, which has very simple workflows: 3-4 states, rule-based transitions, and at most a few actors.
The WF is pretty fixed, so storing its definition in Java config is quite OK.
I'd prefer to use an SM rather than a WF engine, which comes with the whole machinery, but I couldn't find out whether there is a notion of an Actor.
Meaning, only one particular user (determined by login string) can trigger a transition between states.
Also, can I run the same state machine definition in parallel? Is there a notion of an instance, like a process instance in WF jargon?
Thanks,
Milan
An Actor with security is an interesting concept, but we don't have anything built in right now. I'd say this can be accomplished via Spring Security, e.g. https://spring.io/blog/2013/07/04/spring-security-java-config-preview-method-security/, and there's more in its reference docs.
I could try to think about whether there's something we could do to make this easier with Spring Security.
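In the meantime, a transition guard that consults Spring Security's context could approximate the actor check. This is just a sketch, not a built-in feature; ActorGuard and the login value are made up:

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.statemachine.StateContext;
import org.springframework.statemachine.guard.Guard;

public class ActorGuard implements Guard<String, String> {

    private final String requiredLogin;

    public ActorGuard(String requiredLogin) {
        this.requiredLogin = requiredLogin;
    }

    // The transition only fires when the currently authenticated user
    // matches the actor allowed to trigger it.
    @Override
    public boolean evaluate(StateContext<String, String> context) {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth != null && requiredLogin.equals(auth.getName());
    }
}

You would attach it to a transition via the configurer's guard(...) method, e.g. .withExternal().source("A").target("B").event("E").guard(new ActorGuard("milan")).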
Parallel machines are on my todo list. It is a big topic, so it will take a while to implement. Follow https://github.com/spring-projects/spring-statemachine/issues/35 and other related tickets. That issue is the foundation for making distributed state machines.
I'm looking for an easy way, on some of my flows, to log when some "event" occurs.
In my simple case an "event" might be whenever any message flows down a channel, or whenever a certain # of messages flow down a channel, I'd like to print out some info to a log file.
I know there is currently a logging-channel-adapter, but in the case just described I'd need to tailor my own log message, and I'd also need some sort of counter or metric keeping track of things (so the expression on the adapter wouldn't suffice, since it grants access to the payload but not to info about the channel or flow).
I'm aware that Spring Integration already exposes a lot of metrics to JMX via @ManagedResource, @ManagedMetric, and MetricType.
I've also watched Russell's "Managing and Monitoring Spring Integration Applications" YouTube video several times: https://www.youtube.com/watch?v=TetfR7ULnA8
and I realize that Spring Integration component metrics can be polled via jmx-attribute-polling-channel-adapter
There are certainly many ways to get what I'm after.
A few examples:
A ServiceActivator that has a counter in it and a reference to a logger
Hook into the advice-chain of a poller
Poll JMX via jmx-attribute-polling-channel-adapter
It might be useful, however, to offer a few components that users could put into the middle of a flow to provide some basic functionality that easily satisfies the use case I described.
Sample flow might look like:
inbound-channel-adapter -> metric-logging-channel-interceptor -> componentY -> outbound-channel-adapter
At a very high level, such a component might look like a hybrid of the logging-channel-adapter and a ChannelInterceptor, with a few additional fields:
<int:metric-logging-channel-interceptor
    id=""
    order=""
    phase=""
    auto-startup=""
    ref=""
    method=""
    channel=""
    outchannel=""
    log-trigger-expression="(SendCount % 10) = 0"
    level=""
    logger-name=""
    log-full-message=""
    expression="" />
Internally, the class implementing that would need to keep a few basic stats; I think the ones exposed on MessageChannel would be a good start (e.g. SendCount, MaxSendDuration, etc.).
The log-trigger-expression and expression attributes would need access to the internal counters as well.
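As a rough sketch of the internals I have in mind (assuming the ChannelInterceptor interface with default methods; on older Spring versions this would extend ChannelInterceptorAdapter instead):

import java.util.concurrent.atomic.AtomicLong;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptor;

// Counts sends on the channel it is registered on and logs a tailored
// message every N sends, i.e. a hand-rolled "(SendCount % N) = 0" trigger.
public class MetricLoggingChannelInterceptor implements ChannelInterceptor {

    private static final Logger log =
            LoggerFactory.getLogger(MetricLoggingChannelInterceptor.class);

    private final AtomicLong sendCount = new AtomicLong();
    private final long logEvery; // e.g. 10 => log every 10th message

    public MetricLoggingChannelInterceptor(long logEvery) {
        this.logEvery = logEvery;
    }

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        long count = sendCount.incrementAndGet();
        if (count % logEvery == 0) {
            log.info("{} messages sent on {}; latest payload: {}",
                    count, channel, message.getPayload());
        }
        return message;
    }
}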
Please let me know if there is something that already does what I'm describing, or if I'm overcomplicating this. If it does not currently exist, though, I think being able to quickly drop a component into a flow, without having to write a custom ServiceActivator just for logging purposes, would provide benefit.
Interesting question. You can already do something similar with a selective wire-tap...
<si:publish-subscribe-channel id="seconds">
<si:interceptors>
<si:wire-tap channel="thresholdLogger" selector="selector" />
</si:interceptors>
</si:publish-subscribe-channel>
<bean id="selector" class="org.springframework.integration.filter.ExpressionEvaluatingSelector">
<constructor-arg
value="#mbeanServer.getAttribute('org.springframework.integration:type=MessageChannel,name=seconds', 'SendCount') > 5" />
</bean>
<si:logging-channel-adapter id="thresholdLogger" />
There are a couple of things going on here...
The stats are actually held in the MBean for the channel, not the channel itself, so the expression has to get the value via the MBean server.
Right now, the wire-tap doesn't support selector-expression, just selector, so I had to use a reference to an expression-evaluating selector. It would be a useful improvement to support selector-expression directly.
Even though the selector in this example acts on the stats for the tapped channel, it can actually reference any MBean.
I can see some potential improvements here.
Support selector-expression.
Maintain the stats in the channel itself instead of the MBean so we can just use #channelName.sendCount > 5.
Feel free to open JIRA 'improvement' issue(s).
Hope that helps.
Is there any reason, other than semantics, to create different dispatch methods for view and server actions? All tutorials and examples I’ve seen (most notably this) ignore the source constant entirely when listening to dispatched payloads in favor of switching on the payload's action type.
I suppose there is a reason why this pattern is pervasive in flux examples, but I have yet to see a concrete example as to why this is useful. Presumably one could add an additional if or switch on the payload source to determine whether to act in stores, but no examples I've seen consider this constant at all. Any thoughts on this would be much appreciated.
Yes, this was cruft/cargo-culting that came over from a particular Flux project at Facebook, but there is no real reason to do this. If you do need to differentiate between server and view actions, you can just give them different types, or have another property of the action itself to help differentiate them.
When I get time, I plan to rewrite all the examples and documentation to reflect this.