How do I pass a user id into an Akka system?

We have a small service that uses Akka internally to run actions concurrently and in a non-blocking way. Akka is of great value to us, but now I need to add authorization (filtering for search, and access restrictions on specific objects) to some of the actors. We're using Spring Boot and Spring Security to handle authentication and authorization in the web container, before handing most of the work off to actors managed by the Akka system. Naively, I thought I could use
String userId = SecurityContextHolder.getContext().getAuthentication().getName();
in the actors, but that of course fails because we are on Akka-controlled threads. By default, I don't have control over the threads of my Akka system, so there's no thread-local security context available.
I see two options:
Pass the user id as part of the messages
Pass the Spring DelegatingSecurityContextExecutorService to the Akka system via a ExecutionContext
On 1: I can pass the user id with the messages to make it available to the respective actors. However, once the authenticated user id becomes part of a message, how do I protect my actors from receiving messages that contain an arbitrary user id? Is this bad practice, or is an Akka system protected against such misuse? Do I need to enable further security on the Akka system?
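Akka itself doesn't inspect message contents, so the guarantee has to come from your own types: make the user id a required, immutable field of the message, and only construct these messages in the web layer after Spring Security has authenticated the request. A minimal sketch (the class and field names are hypothetical, not from the original post):

```java
// Hypothetical command message. Akka does not validate message contents,
// so the trust boundary is wherever these messages are constructed --
// keep construction in the Spring layer, after authentication.
public final class SearchCommand {
    private final String userId; // authenticated principal, set by the web layer
    private final String query;

    public SearchCommand(String userId, String query) {
        if (userId == null || userId.isBlank()) {
            throw new IllegalArgumentException("userId is required");
        }
        this.userId = userId;
        this.query = query;
    }

    public String getUserId() { return userId; }
    public String getQuery() { return query; }
}
```

Immutability matters here: a message may be processed on a different thread than the one that created it, and an actor should never be able to observe a mutated or half-constructed user id.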
On 2: I can pass my own ExecutionContext when creating the Akka system. That allows me to pass in an instance of Spring's DelegatingSecurityContextExecutorService, in the hope that the thread-local SecurityContext is propagated accordingly. Has anybody done that successfully? One issue that comes to mind: what happens when an actor actually runs on a remote machine? The threads there certainly won't have the SecurityContext.
So what's your take on this? Do you see any other options available? Many thanks in advance!

Related

Do NestJS providers need to be stateless?

I'm a long-time Spring developer learning NestJS. The similarities are so striking, and I've loved how productive that's allowed me to be. Some documentation has me confused about one thing however.
I try to liken Nest "providers" to Spring beans with default scope. For example, I create @Injectable service classes and think of them as analogous to Spring @Service classes. As such, I've assumed these service classes need to be thread-safe - no state, etc. However, the Nest documentation here is a little ambiguous to me and kind of implies this might not be necessary (emphasis mine):
For people coming from different programming language backgrounds, it might be unexpected to learn that in Nest, almost everything is shared across incoming requests. We have a connection pool to the database, singleton services with global state, etc. Remember that Node.js doesn't follow the request/response Multi-Threaded Stateless Model in which every request is processed by a separate thread. Hence, using singleton instances is fully safe for our applications.
If individual requests aren't handled in their own threads, is it OK for Nest providers to contain mutable state? It would be up to the app to ensure each incoming request started with a "clean slate" - e.g. initializing that state with a NestInterceptor, for example. But to me, that doc reads that providers are created as singletons, and thus can be used as something akin to a wrapper container for data, like a ThreadLocal in Java.
Am I reading this wrong, or is this a difference in behavior between Nest and Spring?
You really should make request handling stateless.
I don't know anything about Spring, but NestJS (and async JavaScript in general) is single-threaded, yet doesn't block for I/O. That means the same thread of the same instance of a service can process multiple requests at once. It can only do one thing at a time, but it can start doing the next thing while the previous thing is waiting on a database query, or for the request to finish being transmitted, or for an external service to respond, or for the filesystem to deliver the contents of a file, etc.
So in one thread, with one instance of a service, this can happen:
Request A comes in.
Database query is dispatched for request A.
Request B comes in.
Database query is dispatched for request B.
Database query for request A returns, and the response is sent.
Database query for request B returns, and the response is sent.
What that means for state is that it will be shared between requests. If your service sets an instance property at one step of an async operation, then another async operation may start before the first was complete and set a new value for that instance property, which is probably not what you want.
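That hazard can be sketched outside of Node as well. The following Java sketch (names are illustrative; a manually completed future stands in for the awaited database query) shows two "requests" interleaving through one singleton instance, with the second overwriting the first's stashed state before its continuation runs:

```java
import java.util.concurrent.CompletableFuture;

// Anti-pattern sketch: a singleton service stashing per-request state
// in an instance field that every in-flight request shares.
class ReportService {
    String currentUser; // shared across requests -- the bug

    CompletableFuture<String> handle(String user, CompletableFuture<String> dbQuery) {
        currentUser = user; // step 1: stash state
        // step 2 runs later, once the "database" responds
        return dbQuery.thenApply(rows -> "report for " + currentUser);
    }
}

public class InterleavingDemo {
    public static void main(String[] args) throws Exception {
        ReportService svc = new ReportService();
        CompletableFuture<String> db = new CompletableFuture<>(); // pending DB query
        CompletableFuture<String> a = svc.handle("alice", db);    // request A starts
        CompletableFuture<String> b = svc.handle("bob", db);      // request B starts, clobbers state
        db.complete("rows");                                      // both queries "return"
        System.out.println(a.get()); // report for bob -- alice's state was overwritten
        System.out.println(b.get()); // report for bob
    }
}
```

Request A gets request B's user: exactly the bug you would hit in a stateful Nest provider, no second thread required.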
I believe the "global state" the Nest docs mention is not per request, but general configuration state. Like the URL of an external service, or credentials to your database.
It's also worth mentioning that controllers receive a request object, which represents that specific request. It's common to add properties to that request object, like the current authenticated user for example. The request object can be passed around to give your controller and services context in a way that is friendly to this architecture.

Saga Pattern on hardware failure and inter services communication

I am building a Spring Boot microservice application. I am planning on adopting the Saga pattern to tackle the distributed transaction problem. Below is the list of questions and problems that I am facing.
Here is the context for ease of explanation.
Client -> Service A -> Service B
Handling of non-alive microservices due to failure
Assuming that Service B is not alive due to hardware / software failure, how should A react?
Async communication
It is recommended that we use async communication for the saga pattern. Assuming the client -> A call completes before A -> B does, how does the client receive the data that A gets from B at a later time? Does A have to return an async handle to the client - something like a CompletableFuture?
Service requesting resources from other services.
Assuming that Service A has to request some resources from Service B, how should A go about doing this? All I can think of is HTTP / gRPC (ruling out communication via the message broker).
If you happened to have some experience / advice, please share :)
Any help or advice on Saga pattern is appreciated!
Saga is used for distributed transactions. It can be implemented as orchestration-based or choreography-based, and it is usually (and preferably) implemented with asynchronous communication; a message broker plays an important role here.
You raise several questions - let me try to answer them.
If one service is down - You can set up a monitoring system for your sagas. If a service is down, or a saga has not progressed within some threshold time, raise an alert.
Async communication - This is mostly used to process commands (not queries). When the client calls Service A, A initiates the saga and replies immediately with the current status plus an id (a job id, say). The client then gets status updates in one of two ways: polling (the client asks for an update every N seconds) or push (the server pushes an update whenever the state changes).
Service requesting resources from another - Yes, the preferred way is REST or gRPC. If the data is fairly constant, you can also use a cache.
Suggestion - SRE (monitoring, etc.) plays an important role in a microservice architecture. With that set up well, the other challenges of microservices become much easier to handle.
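The job-id-plus-polling flow described above can be sketched with an in-memory status registry (names and statuses are illustrative; a real service would persist this state and expose it over REST):

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical saga status registry: Service A records progress here,
// returns the job id to the client, and the client polls until done.
class SagaTracker {
    enum Status { PENDING, COMPLETED, FAILED }

    private final ConcurrentMap<UUID, Status> jobs = new ConcurrentHashMap<>();

    // Called when the client's request arrives: start the saga, hand back an id.
    UUID start() {
        UUID jobId = UUID.randomUUID();
        jobs.put(jobId, Status.PENDING);
        return jobId;
    }

    // Called by the saga machinery (e.g. a message-broker listener) on completion.
    void finish(UUID jobId, Status outcome) {
        jobs.replace(jobId, outcome);
    }

    // Called by the client's poll request: "what is the status of job X?"
    Status poll(UUID jobId) {
        return jobs.getOrDefault(jobId, Status.FAILED);
    }
}
```

The push variant replaces the client-side poll loop with a WebSocket or server-sent-events channel, but the server-side bookkeeping stays the same.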

Data sharing with microservices

I am implementing an event-driven microservice architecture. Imagine the following scenario:
Chat service: Ability to see conversations and send messages. Conversations can have multiple participants.
Registration-login service: Deals with the registration of new users, and login.
User service: Getting/updating user profiles.
The registration-login service emits the following events (registration-new carries the newly created user object):
registration-new
login-success
logout-success
The chat service then listens on registration-new and stores some fields of user in its own redis cache. It also listens on login-success and stores the token, and on logout-success to delete the token.
The user service has the following event: user-updated. When this is fired, a listener in the chat service updates the data corresponding to the user id in redis. Like the chat service, the user service also listens on login-success and logout-success and does the same thing as what the chat service does.
My question is the following: is this a good way to do this? It feels a bit counterintuitive to be sharing data everywhere. I need some advice on this. Thank you!
It seems there's no other way. Microservices architecture puts a lot of emphasis on avoiding data sharing so as not to create dependencies, which means each microservice will have some data duplicated. It also means there must be a way of getting data from other contexts. The preferred methods strive for eventual consistency, such as publishing messages to event-sourcing or AMQP systems and subscribing to them. You can also use synchronous methods (RPC calls, distributed transactions). That creates additional technological dependencies, but if you cannot accept eventual consistency, it may be the only way.
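The "duplicate what you need, keep it fresh via events" approach from the question can be sketched as a listener maintaining a local read model. A plain map stands in for the chat service's Redis cache here; the event and field names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal local read model for the chat service: it stores only the user
// fields it needs, kept current by events from the other services.
class UserCache {
    record UserView(String id, String displayName) {}

    private final Map<String, UserView> users = new ConcurrentHashMap<>();

    // Handler for registration-new and user-updated events: upsert the view.
    void onUserEvent(String id, String displayName) {
        users.put(id, new UserView(id, displayName));
    }

    UserView get(String id) {
        return users.get(id);
    }
}
```

The key property is that the chat service never calls the user service synchronously to render a conversation; it serves from its own copy and accepts that the copy may briefly lag behind a user-updated event.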

Queue an async web request in Spring with credentials

I'm relatively new to Spring, and I'm trying to queue up a set of web requests on the server (in order to warm memcached). It's unclear to me how I can transfer the current request's credentials to the future web request I'm putting in the queue. I've seen a handful of scheduling solutions (TaskExecutor, ApplicationEventMulticaster, etc.), but it was unclear if/how they handle credentials, as that seems to be the most complicated part of this task.
It's not possible directly. Security credentials are stored in a ThreadLocal, which means that once the request is forwarded to another thread, the credentials are lost. All you can do (which might actually be beneficial to your design) is pass the credentials explicitly, by wrapping them inside the Callable/Runnable or whatever mechanism you use.
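The capture-and-restore wrapping looks roughly like this. A plain ThreadLocal stands in for Spring's SecurityContextHolder here (in real Spring code, DelegatingSecurityContextExecutorService does this same capture/restore for you around the actual SecurityContext):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ContextPropagation {
    // Stand-in for SecurityContextHolder; illustrative only.
    static final ThreadLocal<String> USER = new ThreadLocal<>();

    // Wrap a task so it runs with the *submitting* thread's context.
    static <T> Callable<T> withCurrentUser(Callable<T> task) {
        String captured = USER.get();   // captured on the web thread, at submit time
        return () -> {
            USER.set(captured);         // restored on the worker thread
            try {
                return task.call();
            } finally {
                USER.remove();          // never leak context into a pooled thread
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        USER.set("alice");
        Future<String> f = pool.submit(withCurrentUser(() -> "hello from " + USER.get()));
        System.out.println(f.get()); // hello from alice
        pool.shutdown();
    }
}
```

The finally-block cleanup is the part people forget: worker threads are pooled, so a context left behind would bleed into whichever request runs on that thread next.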

NetTcpBinding with Streaming and Session

I'm trying to set up a WCF service using NetTcpBinding. I use transfer mode Streamed since I will transfer large files. I need to use sessions, and I have read that NetTcpBinding supports them, but when I turn it on like:
SessionMode=SessionMode.Required
I get the error:
System.InvalidOperationException: Contract requires Session, but Binding 'NetTcpBinding' doesn't support it or isn't configured properly to support it.
Does anyone know what I have to do to make NetTcpBinding work with sessions?
Thanks for any help :)
You've no doubt solved this - but for others that come across it (as I did)...
According to "Programming WCF Services" by Juval Lowy, you can't use streaming with a contract configured with SessionMode.Required (see page 243).
Nor can you use NetTcpBinding with reliable messaging together with streaming.
The book doesn't elaborate as to why.
One workaround might be to split the operations that require session mode into one contract and the streaming operations into another.
Then implement a unique ID for each client (a GUID unique for the lifetime of the client app), which is passed over the non-streaming interface via a RegisterSession(Guid mySessionId) operation.
When sessions are created on the server, they can register with a session manager object, which stores the (GUID, SessionContractImplementation) pair in a Dictionary.
Then add a parameter (the same GUID) to the streaming contract operations, so that the streaming contract implementation can access the live non-streaming object (via the session manager you created, using the GUID provided).
You'll have to manage session lifetimes appropriately, of course.
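The session-manager bookkeeping in that workaround is language-neutral; here is a sketch of it in Java (the original is WCF/C#, and all names here are illustrative): sessions register under a client GUID so the streaming contract can find the live session object later.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the GUID-keyed session registry from the workaround above.
class SessionManager {
    interface Session { String clientName(); }

    private final ConcurrentMap<UUID, Session> sessions = new ConcurrentHashMap<>();

    // Called from the RegisterSession(Guid) operation on the session contract.
    void register(UUID clientId, Session session) {
        sessions.put(clientId, session);
    }

    // Called from the streaming operations, which carry the same GUID.
    Session lookup(UUID clientId) {
        return sessions.get(clientId);
    }

    // Called when the session ends -- lifetimes must be managed explicitly,
    // or the registry leaks dead sessions.
    void unregister(UUID clientId) {
        sessions.remove(clientId);
    }
}
```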
From Microsoft...
Sessions and Streaming
When you have a large amount of data to transfer, the streaming transfer mode in WCF is a feasible alternative to the default behavior of buffering and processing messages in memory in their entirety. You may get unexpected behavior when streaming calls with a session-based binding. All streaming calls are made through a single channel (the datagram channel) that does not support sessions even if the binding being used is configured to use sessions. If multiple clients make streaming calls to the same service object over a session-based binding, and the service object's concurrency mode is set to single and its instance context mode is set to PerSession, all calls must go through the datagram channel and so only one call is processed at a time. One or more clients may then time out. You can work around this issue by either setting the service object's InstanceContextMode to PerCall or Concurrency to multiple.
Note:
MaxConcurrentSessions have no effect in this case because there is only one "session" available.
See http://msdn.microsoft.com/en-us/library/ms733040.aspx