EWS - One or more subscriptions in the request reside on another Client Access server - outlook

I get this error when using a streaming subscription with impersonation. After the connection opens and notifications are received successfully for several minutes, it suddenly raises a batch of these errors for almost all subscriptions:
One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request., The Availability Web Service instance doesn't have sufficient permissions to perform the request
How can I avoid this error? I need to keep the connection stable.

Sounds like you haven't used affinity: https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/how-to-maintain-affinity-between-group-of-subscriptions-and-mailbox-server
Also, if it's a multi-threaded application, ExchangeService isn't thread safe and shouldn't be used across multiple threads.
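For reference, a minimal sketch of those affinity headers using the ews-java-api (mailbox, URL, and credentials are placeholders; the same X-AnchorMailbox / X-PreferServerAffinity headers apply to the .NET Managed API):

```java
import java.net.URI;
import java.util.Collections;

import microsoft.exchange.webservices.data.core.ExchangeService;
import microsoft.exchange.webservices.data.core.enumeration.misc.ConnectingIdType;
import microsoft.exchange.webservices.data.core.enumeration.misc.ExchangeVersion;
import microsoft.exchange.webservices.data.core.enumeration.notification.EventType;
import microsoft.exchange.webservices.data.core.enumeration.property.WellKnownFolderName;
import microsoft.exchange.webservices.data.credential.WebCredentials;
import microsoft.exchange.webservices.data.misc.ImpersonatedUserId;
import microsoft.exchange.webservices.data.notification.StreamingSubscription;
import microsoft.exchange.webservices.data.notification.StreamingSubscriptionConnection;
import microsoft.exchange.webservices.data.property.complex.FolderId;

public class AffinityExample {
    public static void main(String[] args) throws Exception {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
        service.setUrl(new URI("https://outlook.office365.com/EWS/Exchange.asmx"));
        service.setCredentials(new WebCredentials("svc-account", "password"));

        String mailbox = "user@contoso.com"; // placeholder mailbox
        service.setImpersonatedUserId(
                new ImpersonatedUserId(ConnectingIdType.SmtpAddress, mailbox));

        // Affinity headers from the linked article: anchor every request for
        // this group of subscriptions to one mailbox, and ask Exchange to pin
        // the group to one Client Access server.
        service.getHttpHeaders().put("X-AnchorMailbox", mailbox);
        service.getHttpHeaders().put("X-PreferServerAffinity", "true");

        StreamingSubscription subscription = service.subscribeToStreamingNotifications(
                Collections.singletonList(new FolderId(WellKnownFolderName.Inbox)),
                EventType.NewMail);

        // The Subscribe response carries an X-BackEndOverrideCookie that must
        // be echoed on every later request for this group; reusing this same
        // (single-threaded) ExchangeService instance for the group keeps it.
        StreamingSubscriptionConnection connection =
                new StreamingSubscriptionConnection(service, 30); // 30-minute lifetime
        connection.addSubscription(subscription);
        connection.open();
    }
}
```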

Related

Microservice: persist blocked users in DB and verify in other microservices

Description of our project
We follow a microservices architecture in our project, with a database per service. We are trying to introduce a blacklist function to our services, meaning that if a user is blacklisted from the system, they can't use any of our microservices. We have multiple entry/exit points into our microservices, such as a gateway service (used by the frontend team), websocket message receivers, and multiple Spring schedulers that process user data.
Current solution
We persist the blacklisted users in a DB and expose them through an endpoint; we can call this the access service. Blacklisted users are added to the database by the support team via the access service's create endpoint. Whenever the gateway receives a request from the frontend, it calls the access service to check whether the current user is present in the blacklist DB; if the user is blacklisted, further access is blocked. The same goes for every message received from schedulers or websocket notifications, i.e., on each call we check whether the user is blacklisted.
Problem statement
We have two websocket notification receivers and multiple schedulers that run every 5 minutes, all of which in turn need to query the same blacklist access service. Because of this, we are making too many calls to the access service, and it is becoming a bottleneck.
How do we avoid this?
There are several approaches to the blocklisting problem.
First, you could have one service holding the blocklist, and for every incoming request to every service you would make an extra call to the blocklist service. Clearly, this is a huge availability and scalability risk.
The second option is push based: the blocklist service notifies all other services about blocklisted users, so every service can make a local decision about whether to process a request.
The third option is to bake expiration into user sessions. Every session has three elements: an expiration time, an access token, and a refresh token. Until the access token expires, every service accepts requests carrying it. Once it has expired, the client has to obtain a new one from a token service; that service reads the refresh token, checks whether the user is still active, and if so issues a new access token.
The third option is the one widely used. Most (all?) cloud providers use short-lived credentials for this specific goal: to make sure access can be revoked after some time.
Short-lived credentials vs. a dedicated service is a well-known trade-off; you can read more about a very similar problem here: https://en.wikipedia.org/wiki/Certificate_revocation_list
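To make the third option concrete, here is a minimal sketch using the jjwt library (the five-minute TTL, the in-memory blacklist set, and all names are illustrative; in practice the signing key would be shared or asymmetric so every service can verify). The blacklist is consulted only when a token is issued or refreshed; every other request is validated locally, with no call to the access service:

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.util.Date;
import java.util.Set;

public class TokenService {
    // Illustrative only: real services would share a key (or use a key pair).
    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);
    private final Set<String> blacklist; // stand-in for the access service's DB

    TokenService(Set<String> blacklist) {
        this.blacklist = blacklist;
    }

    /** Called only on issue/refresh: the one place that consults the blacklist. */
    String issueAccessToken(String userId) {
        if (blacklist.contains(userId)) {
            throw new SecurityException("user is blacklisted");
        }
        return Jwts.builder()
                .setSubject(userId)
                .setExpiration(new Date(System.currentTimeMillis() + 5 * 60_000))
                .signWith(key)
                .compact();
    }

    /** Any service verifies locally; no network call per request. */
    String verify(String accessToken) {
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(key)
                .build()
                .parseClaimsJws(accessToken) // throws if expired or tampered with
                .getBody();
        return claims.getSubject();
    }
}
```

A blacklisted user keeps working only until their current access token expires (here at most five minutes), which is exactly the trade-off the CRL article above discusses.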

How to funnel an API call to a specific service fabric node

I have exposed a websocket-enabled service endpoint through Azure Application Gateway, and the service is hosted on Azure Service Fabric. A client initiates a websocket connection with my endpoint and is able to exchange data. During certain message flows, my websocket-enabled service calls other services hosted on the Service Fabric cluster via Azure Service Bus. These are handled in a completely async manner. Once the other services finish processing, they post a message to the service bus, which my websocket service reads back.
The problem I am having is routing the messages back to the right Service Fabric node, so that the data can be pushed back to the client at the other end of the websocket connection.
In the picture below, you can imagine each node containing multiple services, including the websocket-enabled service. Once the websocket service posts a message to the service bus, the downstream services start processing and finally post a message back to the service bus, which the websocket service reads. Here a random node will pick up the message, and it might not hold the relevant websocket connection to push the processed data back.
[Image: sample design]
I have looked at the Redis pub/sub model, and it looks like I would have to track the last message processed on each node. It also means every node in the cluster has to read each message and discard it if it doesn't hold the websocket connection with the client. I am looking for any suggested design models for this kind of problem.
I ran into a similar scenario and didn't like the idea of using a new external service (Redis/SQL Server) as a backplane that would simply duplicate each message/event across all nodes.
The solution I settled on was to lean on a property of actor proxies, using actor events to call back to a specific instance of a stateless service, creating an actor service that acts as a pub/sub backplane.
The solution is summarised in this blog post and this GitHub repo. It's worth pointing out that the documentation states actor events are best effort. This hasn't really been an issue while the application is running normally; I presume that during a deployment or failover some events may get lost, but this could be mitigated with additional work.
It's also worth noting that your load balancing rules should maintain sticky connections between clients and back-end instances. You could create separate rules for websockets if you only wanted this to apply to them and not your regular HTTP traffic.
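For contrast, the broadcast model described in the question (every node sees every message and discards the ones it can't deliver) boils down to a per-node registry. The sketch below is purely illustrative plain Java, not the actor-based solution, and all names are made up:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Tracks the websocket connections owned by this node. */
public class LocalConnectionRegistry {

    /** Stand-in for a live websocket session held on this node. */
    public interface WebSocketSession {
        void push(String payload);
    }

    private final Map<String, WebSocketSession> local = new ConcurrentHashMap<>();

    public void register(String connectionId, WebSocketSession session) {
        local.put(connectionId, session);
    }

    public void unregister(String connectionId) {
        local.remove(connectionId);
    }

    /** Invoked for every message broadcast on the backplane (e.g. Redis pub/sub). */
    public void onBackplaneMessage(String connectionId, String payload) {
        WebSocketSession session = local.get(connectionId);
        if (session != null) {
            session.push(payload); // this node owns the connection
        }
        // else: drop it; the node that owns the connection will deliver it
    }
}
```

The actor-event approach avoids this duplicate delivery, because completed work calls back only to the one service instance that registered interest.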

Using SignalR to push to clients from a long running process

Firstly, here is the state of my application:
I have a request coming in from a client (angularjs app) into my API (web api 2). This request is processed and a record is stored in a database. A response is then sent back to the client.
Currently, I have a Windows service polling for and processing these records.
Processing a record can be long running. As a side effect of processing a record, there might be notifications generated to be sent back to one or more clients.
My question is how to architect this such that I can utilise SignalR to push the notifications back to the client.
My stumbling block:
I can register and store (in memory, backed by a DB) the client's SignalR connection id along with the application's own user identifier. This way I can match a generated notification with a SignalR client.
At the moment, I'm hosting the SignalR hubs within the IIS process. So how do I get from the Windows service back to IIS to notify the client when a notification is generated?
Furthermore, I should say I am already using SignalR elsewhere in the application and am using a SQL Server backplane.
The issues with the current architecture:
Any processing is done in the same web request, and notifications are sent out via SignalR before a response is returned to the client. Luckily, the processing is minimal and very quick.
I think this is not very good in terms of performance or maintenance in the long run.
Potential solutions:
Remove the SignalR hubs from IIS and host them somewhere else, e.g. the Windows service?
Expose an endpoint on the API for the Windows service to call to push a notification once it is generated?
Finally, to add more ingredients to the mix: use a service bus to remove the polling component of the Windows service and move to a pub/sub architecture, although this is more than I want to bite off right now.
Any ideas/recommendations/constructive criticisms are welcome.
Thanks.
Take a look at this sample for starters.
Another, more advanced, solution is to use a backplane to manage the communication between the front end and the backend...
HTH
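As a rough sketch of the "expose an endpoint on the API" option from the question: the long-running process POSTs each generated notification to the web API, which looks up the stored SignalR connection id for the user and pushes to that client. The endpoint, port, and payload shape below are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NotificationPusher {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    /**
     * Hand a generated notification to the web API. The API resolves the
     * SignalR connection id stored for userId and pushes to that client.
     */
    static void push(String userId, String message) throws Exception {
        String json = String.format(
                "{\"userId\":\"%s\",\"message\":\"%s\"}", userId, message);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5000/api/notifications"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() / 100 != 2) {
            // A real service would retry or queue the notification instead.
            throw new IllegalStateException("push failed: " + response.statusCode());
        }
    }
}
```

Keeping the hubs in IIS and making one internal call per notification is usually simpler than re-hosting SignalR in the Windows service, and the existing SQL Server backplane still fans the message out across web servers.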

Realtime connection (SockJS/Socket.io) and Microservice application

Currently I'm building an application with a microservice architecture.
The first application is an API that handles user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id in the User object.
The second application is a WORKER that does some work, and sometimes it has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, which then forwards it to the user?
Or can the WORKER send the message directly to the user?
Thank you
In an ideal design, the service responsible for publishing realtime push notifications should be separate from the other services, since a microservice is a set of narrowly related methods and there is no relation between the authentication ("user") service and the realtime push notification service. Breaking it down further, authentication should really be a separate service as well; this is just an FYI, as there might be a reason you did it this way.
How should the services communicate? There are many ways to implement internal communication between services. One is an MQ solution, which adds more technology to your stack, e.g. RabbitMQ, Beanstalkd, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but you need to consider that each HTTP call adds extra cost.
The ideal solution is for each service to expose two interfaces through which it can be invoked: an HTTP interface and an MQ interface (console).
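A minimal sketch of the MQ variant with RabbitMQ's Java client (queue name, user id, and payload are illustrative): the WORKER publishes a notification, and the API, which holds the socket, consumes the queue, looks up the stored socket id for the user, and emits the data on that socket.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

public class Worker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker host is illustrative

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue shared with the API/push service.
            channel.queueDeclare("user-notifications", true, false, false, null);

            // The consumer on the API side resolves the socket id stored for
            // user-42 and emits this payload on that socket.
            String payload = "{\"userId\":\"user-42\",\"event\":\"job-done\"}";
            channel.basicPublish("", "user-notifications", null,
                    payload.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

Routing through the API this way keeps a single owner for client connections, so the WORKER never needs to know which socket (or which API instance) a user is attached to.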

Design of queues with JMS & QPID

I am currently assigned the task of writing an application which will use the JMS API to communicate, using Apache Qpid as the JMS provider.
So basically my application will have multiple server instances running. Each server will serve a set of unique desks, so each instance will only have data for the desks it is serving.
There will also be multiple client instances, each again configured by desk.
Now, when a client starts up, it will request the data for the desk it is serving from the servers. The request should only go to the server that has that desk's data loaded, and the response should only go back to the client that requested the data for that desk.
I am thinking of using queues for this. I am not sure whether I should create only one request queue shared by all the servers, or a separate queue for each server.
For responses, I am planning to use temporary queues.
Note that requests from clients to servers are not very frequent; say each client may send around 50 requests a day.
Can someone please tell me if this is a good design?
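For what it's worth, a single shared request queue can work if each server filters by desk with a JMS message selector and replies to the temporary queue carried in JMSReplyTo. Below is a minimal client-side sketch using the Qpid JMS (AMQP) client; the broker URL, queue name, and desk names are illustrative:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

import org.apache.qpid.jms.JmsConnectionFactory;

public class DeskClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // One shared request queue for all servers. Each server instance
            // subscribes with a selector for its own desks, e.g.
            //   session.createConsumer(requests, "desk IN ('DESK_A', 'DESK_B')")
            // and sends its reply to the request's JMSReplyTo destination.
            Queue requests = session.createQueue("desk.requests");
            TemporaryQueue replies = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage("send desk snapshot");
            request.setStringProperty("desk", "DESK_A"); // matched by the selector
            request.setJMSReplyTo(replies);              // private reply destination

            MessageProducer producer = session.createProducer(requests);
            producer.send(request);

            MessageConsumer consumer = session.createConsumer(replies);
            Message reply = consumer.receive(30_000); // at ~50 requests/day, blocking is fine
            System.out.println("reply: " + reply);
        }
    }
}
```

With selectors, adding a server for new desks needs no new queues; the per-server-queue alternative trades that flexibility for simpler broker-side filtering.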
