If you populate the Context in a WebFilter, how is it available in the controllers? To my knowledge, the Context is made available upstream at subscription time, but the WebFilters come before the controllers in the pipeline, so technically the controllers shouldn't have access to the Context, since it is downstream of the filter in which the Context was populated.
Any explanation and/or references to useful documentation would be appreciated.
This quote is taken straight from the Reactor documentation:
the Context is associated to the Subscriber and each operator accesses the Context by requesting it from its downstream Subscriber.
So what does this really mean? It means that when someone subscribes, the Context is created and attached to that Subscriber.
At subscription time, the subscribe signal travels upstream through each operator until it reaches a producer that will produce data. When that is found, data starts flowing, and each operator accesses the Context by asking for it from the operator after it (its downstream Subscriber).
That's why we declare the Context at the end of the chain: operators request it from downstream, so it gets sent upstream.
There is a long and detailed explanation of how the Context is propagated in the docs; it's too long to reproduce as an answer here, but I suggest you read it and try out their examples.
So data flows downstream, context flows upstream.
Context, reactor documentation
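To illustrate the point, here is a minimal, self-contained sketch (the key and values are made up): the contextWrite declared at the end of the chain is visible to the operators declared above it. In a WebFlux application, calling contextWrite on the result of chain.filter(exchange) inside a WebFilter works the same way, which is why the controller's reactive code (upstream of that contextWrite) can see what the filter populated.

```java
import reactor.core.publisher.Mono;
import reactor.util.context.Context;

public class ContextDemo {

    public static void main(String[] args) {
        // deferContextual reads the Context that is written *below* it in the chain,
        // because the Context travels upstream from the Subscriber.
        Mono<String> greeting = Mono.deferContextual(ctx ->
                        Mono.just("Hello " + ctx.getOrDefault("user", "unknown")))
                // contextWrite is declared downstream, yet is visible above
                .contextWrite(Context.of("user", "alice"));

        greeting.subscribe(System.out::println); // prints "Hello alice"
    }
}
```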
I am attempting to accomplish something along these lines with Quarkus and Narayana:
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
Yes, it might be a valid use case, but in any case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
The idea you describe is more or less one LRA participant executing in a new LRA and polling the status of this execution. This is not totally what the LRA is intended for, but surely can be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So you see that the main benefit arises when you can propagate LRA through different services that either all complete their actions or all of their compensation callbacks will be called in case of failures (and, of course, only for the services that executed their actions in the first place). Here is also an example with the LRA propagation https://github.com/xstefank/quarkus-lra-trip-example.
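For reference, here is a minimal sketch of an annotation-based participant as described by the MicroProfile LRA specification (resource name, paths and the javax.ws.rs namespace are assumptions for illustration; the coordinator calls the @Complete/@Compensate callbacks for you):

```java
import java.net.URI;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;

@Path("/lra")
public class LongActionResource {

    // Starts a new LRA; the coordinator assigns the id and passes it
    // in the LRA_HTTP_CONTEXT_HEADER. end = false keeps the LRA open
    // so the work can finish (and be closed) later.
    @POST
    @Path("/start")
    @LRA(value = LRA.Type.REQUIRES_NEW, end = false)
    public Response start(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // kick off the long-running work, return the id so the client can poll
        return Response.accepted(lraId.toASCIIString()).build();
    }

    // Invoked by the coordinator when the LRA is closed successfully.
    @PUT
    @Path("/complete")
    @Complete
    public Response complete(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }

    // Invoked by the coordinator if the LRA is cancelled.
    @PUT
    @Path("/compensate")
    @Compensate
    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }
}
```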
EDIT: Sorry, I forgot to add the programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. However, note that it is not in the specification and is specific to Narayana.
Up until now I have been handling authorization in the CommandHandlers.
An example: I have an aggregate "Team" containing a list of managers (the AggregateIdentifier of a User). All command handlers in the Team aggregate then verify that the user executing the command is a manager of the team.
The userId is injected as metadata in a CommandHandlerInterceptor based on the SecurityContext.
My main concern is that when I use sagas, it becomes additional overhead to maintain the user context across the commands issued against different aggregates. Aside from that, the manager association can expire while the saga is running, causing subsequent commands to fail and leaving an incomplete state which also needs to be handled with some rollback functionality.
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
Authorization to perform certain operations/commands is something which I'd argue isn't domain-specific logic. Instead, it is more a form of cross-cutting concern which you need throughout your application. Thus, placing it in the @CommandHandler annotated method is not the ideal place in my opinion. However, placing it close by makes a lot of sense.
You have pointed out you are already using a CommandHandlerInterceptor to populate the Spring SecurityContext, thus I am assuming you are using a CommandDispatchInterceptor to populate the command's MetaData with that information when you send a command out. This is a great use of the interceptor logic indeed, so I'd keep that in place. This, however, sets the information; it doesn't validate it.
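For completeness, a minimal sketch of such a dispatch-side interceptor (assuming Axon 4 and Spring Security; the metadata key "userId" is just an example):

```java
import java.util.Collections;
import java.util.List;
import java.util.function.BiFunction;
import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.messaging.MessageDispatchInterceptor;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

// Copies the current user's id from the SecurityContext into the
// command's MetaData at dispatch time.
public class UserMetaDataDispatchInterceptor
        implements MessageDispatchInterceptor<CommandMessage<?>> {

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        String userId = auth != null ? auth.getName() : "anonymous";
        return (index, command) ->
                command.andMetaData(Collections.singletonMap("userId", userId));
    }
}
```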
To that end, you could build your own Handler Enhancer, which validates security metadata on a command. You could even build a dedicated annotation you'd add next to the @CommandHandler annotation, which describes the required roles. That way, the method still portrays what roles you need for the given command, but the actual validation is done in the Handler Enhancer for you.
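A rough sketch of what such an enhancer could look like (assuming Axon 4; the @RequiresRole annotation and the "roles" metadata key are made up for illustration):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.axonframework.messaging.Message;
import org.axonframework.messaging.annotation.HandlerEnhancerDefinition;
import org.axonframework.messaging.annotation.MessageHandlingMember;
import org.axonframework.messaging.annotation.WrappedMessageHandlingMember;

// Hypothetical annotation placed next to @CommandHandler to declare the required role.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresRole {
    String value();
}

// Handler Enhancer that checks the command's MetaData before invoking the handler.
public class RoleCheckingEnhancer implements HandlerEnhancerDefinition {

    @Override
    public <T> MessageHandlingMember<T> wrapHandler(MessageHandlingMember<T> original) {
        return original.annotationAttributes(RequiresRole.class)
                .map(attrs -> (MessageHandlingMember<T>)
                        new RoleCheckingMember<>(original, (String) attrs.get("value")))
                .orElse(original);
    }

    private static class RoleCheckingMember<T> extends WrappedMessageHandlingMember<T> {

        private final String requiredRole;

        RoleCheckingMember(MessageHandlingMember<T> delegate, String requiredRole) {
            super(delegate);
            this.requiredRole = requiredRole;
        }

        @Override
        public Object handle(Message<?> message, T target) throws Exception {
            Object roles = message.getMetaData().get("roles");
            if (roles == null || !roles.toString().contains(requiredRole)) {
                throw new SecurityException("Missing required role: " + requiredRole);
            }
            return super.handle(message, target);
        }
    }
}
```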
Now, let's circle back to your question:
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
I think it's fine to do it in the aggregate, potentially making it cleaner through use of a Handler Enhancer. When it comes to your concern about the Saga, well, I think you should see that separately. The Saga handles events, facts that something has happened. Ignoring that fact because whoever initiated the operations which led to it doesn't have the rights doesn't change the point that it still has happened. Additionally, you are indeed not guaranteed anything about the timing of the Saga at all. Maybe your Saga deals with historical events, meaning the current user's rights are completely out of scope.
If possible within your system, I would regard any command the Saga wants to publish as being sent by a "system user". The Saga is not something your users (which have specific roles) will directly influence; it is all indirect. The Saga is internal to your system, hence it is the system describing the intent to perform an operation.
That's my two cents on the situation; hope this helps you out, @Vincent!
I'm dealing with Spring Integration events and need to ensure the proper order of event calls. I have two listeners. One is called TerminalErrorListener and catches TcpConnectionExceptionEvent and TcpDeserializationExceptionEvent. The second is called TerminalDisconnectEventListener and catches TcpConnectionCloseEvent.
In my case I use NIO and manually extended TcpNioConnection with my own class, which contains one extra field. This field is called Originator and carries information about what caused the TcpConnectionCloseEvent, and here comes my question.
I define the originator inside TerminalErrorListener and need to ensure that TerminalDisconnectEventListener is called after the TerminalErrorListener.
How can I generally ensure that this will happen (presumably I can rely on the close event being fired after the error)? Is there any priority mechanism or default flow model that can be seen in some kind of diagram? I mean, when are specific events fired and what is the general sequence of all events?
Thanks for any answer.
With NIO, there is no guarantee that you will get the deserialization failure event before the connection close event.
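If the goal is just to know why the connection was closed, one possible workaround (not part of the original answer, and assuming the event types expose getConnectionId() and getCause() as in recent Spring Integration versions) is to stop relying on event order and instead correlate the two events by connection id:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.context.event.EventListener;
import org.springframework.integration.ip.tcp.connection.TcpConnectionCloseEvent;
import org.springframework.integration.ip.tcp.connection.TcpConnectionExceptionEvent;
import org.springframework.stereotype.Component;

// Hypothetical correlation of error and close events by connection id,
// so the close handler works even if the close event arrives first.
@Component
public class ConnectionEventCorrelator {

    private final Map<String, Throwable> lastErrorByConnection = new ConcurrentHashMap<>();

    @EventListener
    public void onError(TcpConnectionExceptionEvent event) {
        lastErrorByConnection.put(event.getConnectionId(), event.getCause());
    }

    @EventListener
    public void onClose(TcpConnectionCloseEvent event) {
        // The cause may be absent if the close event arrived before (or without) an error.
        Throwable cause = lastErrorByConnection.remove(event.getConnectionId());
        // ... derive the "originator" from the cause, possibly after a short grace period
    }
}
```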
I have a workflow whose message payload (MasterObj) is being enriched several times. During the 2nd enrichment an UnknownHostException was thrown by an outbound gateway. The error channel on the enricher is called, but the message the error channel receives is an exception, and the failed message inside that exception is no longer my MasterObj (the original payload); it is now the object produced by the request-payload-expression on the enricher.
The enricher calls an outbound gateway, and business-wise this step is optional. I just want to continue my workflow with the payload that I've been enriching. The docs say that the error-channel on the enricher can be used to provide an alternate object (to what the enricher's request-channel would return), but even when I return an object from the enricher's error-channel, it still takes me to the workflow's overall error channel.
How do I trap errors from the enricher's outbound gateways and continue processing my workflow with the same payload I've been working on?
Is trying to maintain a single payload object for the entire workflow the right strategy? I need to be able to access it whenever I need.
I was thinking of using a bean scoped to the session where I store the payload but that seems to defeat the purpose of SI, no?
Thanks.
Well, if you worry about your MasterObj in the error-channel flow, don't use that request-payload-expression and let the original payload go to the enricher's sub-flow.
You can always use a simple <transformer expression=""> in that flow.
On the other hand, you're right: it isn't a good strategy to carry a single object through the whole flow. You carry messages via channels, and it isn't good to be tied to that one object at each step. The point of Spring Integration is to be able to switch between different MessageChannel types at any time with little effort for their producers and consumers. You can also switch to a distributed mode where consumers and producers are on different machines.
If you still need to enrich the same object several times, consider writing some custom Java code. You can use a @MessagingGateway for that and still get the Spring Integration benefits, as sketched below.
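For example (channel and method names are made up, and MasterObj is your payload class; each gateway method maps to a small integration flow):

```java
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

// Plain Java orchestrates the repeated enrichment; each step is still
// handled by a Spring Integration flow behind the gateway.
@MessagingGateway
public interface EnrichmentGateway {

    @Gateway(requestChannel = "customerEnrichChannel")
    MasterObj enrichWithCustomerData(MasterObj payload);

    @Gateway(requestChannel = "addressEnrichChannel")
    MasterObj enrichWithAddressData(MasterObj payload);
}
```

Your calling code can then invoke these methods in sequence and wrap the optional enrichment in a try/catch, continuing with the last good MasterObj if that step fails.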
And right, a session-scoped bean is not a good fit for an integration flow, because you can simply switch to a different channel type there and lose the ThreadLocal context.
What is the exact nature of the thread-unsafety of a JMS Session and its associated constructs (Message, Consumer, Producer, etc)? Is it just that access to them must be serialized, or is it that access is restricted to the creating thread only?
Or is it a hybrid case where creation can be distinguished from use, i.e. one thread can create them only and then another thread can be the only one to use them? This last possibility would seem to contradict the statement in this answer which says "In fact you must not use it from two different threads at different times either!"
But consider the "Server Side" example code from the ActiveMQ documentation.
The Server class has data members named session (of type Session) and replyProducer (of type MessageProducer) which are
created in one thread: whichever one invokes the Server() constructor and thereby invokes the setupMessageQueueConsumer() method with the actual creation calls; and
used in another thread: whichever one invokes the onMessage() asynchronous callback.
(In fact, the session member is used in both threads too: in one to create the replyProducer member, and in the other to create a message.)
Is this official example code working by accident or by design? Is it really possible to create such objects in one thread and then arrange for another thread to use them?
(Note: in other messaging infrastructures, such as Solace, it's possible to specify the thread on which callbacks occur, which could be exploited to get around this "thread affinity of objects" restriction, but no such API call is defined in JMS, as far as I know.)
The JMS specification says a Session object should not be used across threads except when calling the Session.close() method. Technically speaking, if access to the Session object or its children (producer, consumer, etc.) is serialized, then the Session or its child objects can be accessed from multiple threads. Having said that, since JMS is an API specification, its implementations differ from vendor to vendor. Some vendors might strictly enforce thread affinity while some may not. So it's always better to stick to the JMS specification and write code accordingly.
The official answer appears to be a footnote to section 4.4, "Session", on p. 60 of the JMS 1.1 specification.
There are no restrictions on the number of threads that can use a Session object or those it creates. The restriction is that the resources of a Session should not be used concurrently by multiple threads. It is up to the user to insure that this concurrency restriction is met. The simplest way to do this is to use one thread. In the case of asynchronous delivery, use one thread for setup in stopped mode and then start asynchronous delivery. In more complex cases the user must provide explicit synchronization.
Whether a particular implementation abides by this is another matter, of course. In the case of the ActiveMQ example, the code is conforming because all inbound message handling is through a single asynchronous callback.
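To illustrate the "one thread for setup in stopped mode, then start asynchronous delivery" pattern from that footnote, here is a minimal sketch against the plain JMS 1.1 API (the queue name and surrounding structure are made up):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;

public class StoppedModeSetup {

    public void run(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();

        // Setup thread: the connection is not started yet, so no messages
        // are delivered while the session and its objects are created.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("requests"));
        consumer.setMessageListener(message -> {
            // From now on, only the provider's delivery thread touches the
            // session's resources (e.g. to create replies), which satisfies
            // the "no concurrent use" restriction.
        });

        // Start asynchronous delivery only after setup is complete.
        connection.start();
    }
}
```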