Should you always disconnect from a WCF service between calls?

I am using a WCF service to administer a Windows service running on a remote machine. We have an administration client we use for modifying the Windows service's configuration, as well as for monitoring the state of the service in real time. For the real-time monitoring, we poll the service for its state every second.
Currently, we leave the client connected the whole time while monitoring the service, but I keep reading that it is recommended to connect and disconnect for each call, much like you would with a database.
Would that be recommended in our situation, where we are making frequent calls to the service, or would connecting and disconnecting add too much overhead?
Thanks

By default, and as a recommended best practice, you're using per-call activation in WCF, i.e. each request to your WCF service gets a new instance of the service class; that instance handles your request, returns a result, and is then disposed.
In this case, I don't really see any point in constantly tearing down and re-establishing the communication channel (i.e. constantly disposing and re-creating the proxy client). There's nothing on the WCF service side that lingers around in memory and takes up resources or anything like that. Also, unlike most databases, there's usually no per-connection licensing or the like involved, either.
What you do need to deal with in this scenario is the situation where your communication channel goes into a faulted state, i.e. when something bad happens: the service call fails and throws an exception, or a network fluke breaks your channel. In such a case, you need recovery mechanisms on the client side to handle this and re-establish the connection.
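As a rough sketch of that client-side recovery (AdminServiceClient and GetServiceState are placeholder names for whatever your generated proxy actually exposes):

    using System;
    using System.ServiceModel;

    // Hypothetical generated proxy (derives from ClientBase<T>) and the polled operation.
    AdminServiceClient client = new AdminServiceClient();

    try
    {
        var state = client.GetServiceState();
    }
    catch (Exception ex) when (ex is CommunicationException || ex is TimeoutException)
    {
        // The channel is (or may be) faulted and cannot be reused:
        // abort it and create a fresh proxy before the next poll.
        client.Abort();
        client = new AdminServiceClient();
    }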
The situation might be a bit different if you have session-oriented WCF services, but those should definitely be the exception and only be used when really needed.

Related

EWS - One or more subscriptions in the request reside on another Client Access server

I get this error when using a streaming subscription with impersonation.
After the connection has been open and receiving notifications successfully for several minutes, it suddenly pops up a bunch of these errors for almost all subscriptions.
How can I avoid this error?
One or more subscriptions in the request reside on another Client Access server. GetStreamingEvents won't proxy in the event of a batch request., The Availability Web Service instance doesn't have sufficient permissions to perform the request
I need to keep the connection stable and avoid this error.
Sounds like you haven't used affinity: https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/how-to-maintain-affinity-between-group-of-subscriptions-and-mailbox-server
Also, if it's a multi-threaded application, ExchangeService isn't thread-safe and shouldn't be shared across multiple threads.
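Very roughly, the affinity setup from the linked article looks like this in the EWS Managed API (the mailbox addresses and URL below are placeholders; the full cookie round trip is described in the article):

    using System;
    using Microsoft.Exchange.WebServices.Data;

    var service = new ExchangeService(ExchangeVersion.Exchange2013)
    {
        Credentials = new WebCredentials("serviceaccount@contoso.com", "password"),
        Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx")
    };

    // Impersonate the mailbox you are subscribing for.
    service.ImpersonatedUserId =
        new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "user1@contoso.com");

    // Affinity headers: anchor the whole group of subscriptions to one Client Access server.
    service.HttpHeaders.Add("X-AnchorMailbox", "user1@contoso.com");
    service.HttpHeaders.Add("X-PreferServerAffinity", "true");

    // The first Subscribe response carries an X-BackEndOverrideCookie that should be
    // sent with subsequent requests for the same group (see the article).
    var subscription = service.SubscribeToStreamingNotifications(
        new[] { new FolderId(WellKnownFolderName.Inbox) }, EventType.NewMail);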

Using a ServerStreaming rpc call for long running notifications channel

I am thinking about using a gRPC service to facilitate notifications between two services (as an aside, I will be using protobuf-net / protobuf-net.Grpc). The intent is that the client service would establish and maintain a connection to the server service and react to notifications over time. In a perfect world with no network blips, no server restarts, etc., the idea would be to establish this connection once and have that server-streaming call live for the lifetime of the application. Obviously, in the real world we need to deal with retries, reconnects, fail-overs, etc.
My question is: is opening a server-streaming call in gRPC and keeping it open for long periods of time an appropriate use of server streaming, or is it an abuse of that feature?
This is a perfectly fine use case for gRPC. gRPC is designed for this kind of use.
Yes, you do have to deal with reconnections, or more precisely with re-establishing the stream, when the connection to the server dies.
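A minimal sketch of that reconnect loop with protobuf-net.Grpc (the contract, types and address below are assumptions, not your actual service):

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.Threading;
    using System.Threading.Tasks;
    using Grpc.Core;
    using Grpc.Net.Client;
    using ProtoBuf.Grpc;
    using ProtoBuf.Grpc.Client;

    // Hypothetical code-first contract shared between the two services.
    [ServiceContract]
    public interface INotificationService
    {
        [OperationContract]
        IAsyncEnumerable<Notification> Subscribe(CallContext context = default);
    }

    [DataContract]
    public class Notification
    {
        [DataMember(Order = 1)] public string Message { get; set; }
    }

    public static class NotificationListener
    {
        // Keep one server-streaming call open for the lifetime of the app,
        // re-establishing the stream whenever it dies.
        public static async Task RunAsync(CancellationToken ct)
        {
            using var channel = GrpcChannel.ForAddress("https://notifications.example.com");
            var client = channel.CreateGrpcService<INotificationService>();

            while (!ct.IsCancellationRequested)
            {
                try
                {
                    await foreach (var n in client.Subscribe().WithCancellation(ct))
                    {
                        Console.WriteLine(n.Message);
                    }
                }
                catch (RpcException)
                {
                    // Server restart, network blip, etc.: back off and resubscribe.
                    await Task.Delay(TimeSpan.FromSeconds(5), ct);
                }
            }
        }
    }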

Communicate to stateless web Api service from a different application in Azure Service Fabric

I have two different Service Fabric applications. Both are stateless Web API services. From service 1 inside application 1, I need to invoke service 2, which is part of application 2. I am deploying both applications to the same cluster. Can someone advise on the best practice here? What would be the best way to communicate? Please provide a sample as well.
Fabric Transport (aka Service Remoting) is the SDK's built-in communication model. Compared to communication over HTTP or WCF it does a little more for you, especially on the client side of the communication.
When it comes to communicating with Service Fabric services (or really, any distributed system's services), your communication should take into account that the connection could fail to be established on the initial try, or be interrupted mid-communication, and you really shouldn't build your solution expecting it to always work flawlessly. The reason lies in the nature of Service Fabric: at any time it can decide to move primaries from one node to another, the nodes themselves can go down, and the services can crash. Nothing strange about that; the great thing with Service Fabric is that it does a lot of the heavy lifting for you when it comes to maintaining your services and nodes over time.
So, in terms of communication, this means that a client needs to be able to do three things (for it to truly work in a distributed environment):
resolve the address to the service (figure out which node it is on, which port it is listening on, which partition id and replica to target and so on)
connect to the service, package and send requests, and then receive and unpack responses
retry the resolve and connect if the communication fails
Fabric Transport does all this when you are using the Service Remoting clients (like ServiceProxy) and service side listeners.
That's the good part with Fabric Transport: you get all of that out of the box, and most of the time you don't have to change the default setup either. The bad part is that it only works for communication inside the cluster, i.e. you cannot communicate from the outside to a service running in the cluster using Fabric Transport. For that you need HTTP or WCF.
HTTP(S) and WCF (over HTTP(S)) communication allow you to build your own clients and handle the communication yourself. There are a number of samples on how you can do the resolve, connect and retry for HTTP clients, this one for instance.
According to Microsoft there are three built-in communication options. It's up to you to decide which one works best for you. I'm personally using Service Remoting, which is quick to set up. It also allows you to do exception handling in your client service.
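For example, a bare-bones Service Remoting call from service 1 to service 2 might look roughly like this (the interface name and the fabric:/ address are assumptions; the contract assembly has to be shared between the two applications, and service 2 has to expose a remoting listener):

    using System;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Remoting;
    using Microsoft.ServiceFabric.Services.Remoting.Client;

    // Hypothetical shared contract implemented by service 2.
    public interface IService2Api : IService
    {
        Task<string> GetDataAsync(int id);
    }

    public class Service2Caller
    {
        public Task<string> CallService2Async(int id)
        {
            // ServiceProxy handles address resolution, connection and retries.
            var proxy = ServiceProxy.Create<IService2Api>(
                new Uri("fabric:/Application2/Service2"));

            return proxy.GetDataAsync(id);
        }
    }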

Using SignalR to push to clients from a long running process

Firstly, here is state of my application:
I have a request coming in from a client (an AngularJS app) into my API (Web API 2). This request is processed and a record is stored in a database. A response is then sent back to the client.
Currently, I have a windows service polling and processing this record(s).
Processing this record can be long running. As a side effect to processing this record, there might be notifications generated to be sent back to one or more clients.
My question is how to architect this so that I can use SignalR to push the notifications back to the client.
My stumbling block:
I can register and store (in memory, backed by a DB) the client's SignalR connection id along with the application's own user identifier. This way I can match a generated notification with a SignalR client.
At the moment, I'm hosting the SignalR hubs within the IIS process. So how do I get back from the Windows Service to IIS to notify the client when a notification is generated?
Furthermore, I should say I am already using SignalR elsewhere in the application and am using a SQL Server backplane.
The issues with the current architecture:
Any processing is done in the same web request, and notifications are sent out via SignalR before a response to the client is returned. Luckily, the processing is minimal and very quick.
I think this is not very good in terms of performance or maintenance in the long run.
Potential solutions:
Remove the SignalR hubs from IIS and host them somewhere else - the Windows service?
Expose an endpoint on the API for the Windows service to call to push a notification once it is generated?
Finally, to add more ingredients to the mix: use a service bus to remove the polling component of the Windows service and move to a pub/sub architecture. Although this is more work than I want to bite off right now.
Any ideas/recommendations/constructive criticisms are welcome.
Thanks.
Take a look at this sample for starters.
Another, more advanced, solution would be to use a backplane to manage the communication between the front end and the back end...
HTH
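As a sketch of the second option from the question (the Windows service POSTs to an API endpoint, which pushes through the IIS-hosted hub), assuming SignalR 2 and a NotificationHub that is already mapped:

    using System.Web.Http;
    using Microsoft.AspNet.SignalR;

    // Hypothetical hub already hosted in the IIS process.
    public class NotificationHub : Hub { }

    // Endpoint the Windows service calls whenever it generates a notification.
    public class NotificationsController : ApiController
    {
        [HttpPost]
        [Route("api/notifications/{connectionId}")]
        public IHttpActionResult Post(string connectionId, [FromBody] string message)
        {
            var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

            // Push to the client previously registered against this connection id;
            // "notify" is whatever handler the AngularJS client listens for.
            hubContext.Clients.Client(connectionId).notify(message);

            return Ok();
        }
    }

Since a SQL Server backplane is already in place, the same push also reaches clients connected to other web servers in the farm.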

Duplicate underlying WCF calls

I have a WCF client in my WPF app. The WCF client is generated with asynchronous operations.
I am making parallel calls by awaiting Tasks.
I noticed some delay in getting data, and when I sniffed the traffic with Microsoft Message Analyzer, I noticed that for some of the calls I made, two requests were sent about 500 ms apart but only one response came back.
In my app I make only one call.
The question is: why were two underlying requests sent by the WCF client?
P.S. I checked by hosting the service under IIS and IIS Express. Same result in both cases.
Your issue here is not with your client or service, but with your analysis tooling.
Microsoft Message Analyzer is designed for low level network monitoring.
Higher level protocols such as SOAP will almost certainly utilise more than one network message per logical call.
WCF supports lower-level protocols such as UDP, where the number of messages on the network may bear more resemblance to the number of service calls you make, but this is by no means guaranteed.
As such, the service itself is the ultimate arbiter of how many logical service calls it has received.
If you do need to analyse the underlying traffic, you could also look at WCF tracing, which will group network calls together into "conversations" that resolve to a single client-service request/response pair.
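If you go the WCF tracing route, it is enabled in the client's config, roughly like this (the output path is a placeholder; open the resulting .svclog file in the Service Trace Viewer):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="traceListener"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="C:\logs\Traces.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>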
