Channel Factory vs Service Proxy - performance

When should I use ChannelFactory, and when should I use a service proxy in WCF?
My binding is NetNamedPipeBinding, and I'm planning to use a duplex connection.

When to use a proxy?
We create a proxy using svcutil.exe. This tool outputs a proxy class and makes the corresponding changes to the application configuration file. If you have a service that you know is going to be used by several applications, or that is generic enough to be used in several places, you'll want to use the generated proxy classes. We use a proxy in WCF to share the service contract and entities with the client. Proxies have several restrictions: properties need getters and setters, constructors can't be exposed, methods other than those on the service contract can't be exposed, code gets repeated, and every time we add or modify a service contract, data contract, or message contract we need to regenerate the proxy for the client.
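For example, a typical invocation against a service's metadata endpoint might look like this (the address is hypothetical; /out and /config name the generated code file and configuration file):

svcutil.exe http://localhost:8000/MyService?wsdl /out:ServiceProxy.cs /config:app.config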
When to use ChannelFactory
The other option is using the ChannelFactory class to construct a channel between the client and the service without the need for a proxy. In some cases, you may have a service that is tightly bound to the client application. In such a case, it makes sense to reference the interface DLL directly and use ChannelFactory to call your methods through it. One significant advantage of the ChannelFactory route is that it gives you access to methods that wouldn't otherwise be available if you used svcutil.exe.
When to use a ChannelFactory vs Proxy class?
A DLL is helpful if the client code is under your control and you'd like to share more than just the service contract with the client - such as some utility methods associated with entities - making the client and the service code more tightly bound. If you know that your entities will not change much and the amount of client code is small, then a DLL would work better than a proxy. If the client to your service is external to the system, such as a public API, it makes sense to use a proxy, because it makes sharing the contract easier by providing a code file rather than a DLL.
In the case of NetNamedPipeBinding
It's recommended to use ChannelFactory, for the following two reasons:
Ease of use.
Avoiding the proxy layer means extra performance.
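Since the question mentions a duplex connection over named pipes, here is a minimal sketch of a client built this way (the contracts, callback implementation, and pipe address are all hypothetical, not part of the original question):

using System;
using System.ServiceModel;

// Hypothetical duplex contract pair; only the shape matters for this sketch.
[ServiceContract(CallbackContract = typeof(IMyCallback))]
public interface IMyService
{
    [OperationContract]
    void DoWork();
}

public interface IMyCallback
{
    [OperationContract(IsOneWay = true)]
    void OnWorkDone();
}

// Client-side implementation of the callback contract.
public class MyCallback : IMyCallback
{
    public void OnWorkDone() { Console.WriteLine("Service called back."); }
}

public static class Client
{
    public static void Main()
    {
        // DuplexChannelFactory wires the callback instance into the channel,
        // so no generated proxy class is needed.
        var factory = new DuplexChannelFactory<IMyService>(
            new InstanceContext(new MyCallback()),
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/MyService"));

        IMyService channel = factory.CreateChannel();
        channel.DoWork();                  // call the service through the channel
        ((IClientChannel)channel).Close(); // close the channel when done
        factory.Close();
    }
}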

ChannelFactory and a service proxy are equivalent features aimed at the same goal: consuming your service. Usually, if you control the service contract interface on both the client and the server, you'd better use ChannelFactory, because it is easier to manage. If you manage only the client part, a proxy is the way to go, because otherwise you would not be able to track the changes made on the server side. Besides, the proxy gives you a nice tool for generating async methods for your service :)

Related

Simple application in Microservices

I am a newbie to microservices, with only theoretical knowledge so far. I want to build a small application with microservices. Can anyone help me with ideas on how to implement one?
Thanks in advance!!
You can create something like a currency conversion app with three microservices like these:
Limit service;
Exchange service;
Currency conversion service.
The limit service and the currency conversion service can communicate with the database to retrieve the limit values and currency conversion rates.
For more info, check github.com/in28minutes and look for a microservice repository.
No matter how perfect your microservice's code is, you may face issues with support and development if the microservice architecture doesn't follow certain rules.
The following rules can help you with microservices a lot:
You have to do everything by yourself, because you do not have something like Rails with its out-of-the-box architecture that can be started by one command. Your microservice should load libraries, establish client connections, and be able to release resources if it stops working for any reason.
This means that, from the microservice folder, running 'ruby server.rb' (the file that starts the microservice) should make the microservice do the following:
Load used gems, vendor libraries (if used), and our own libraries
Use the configuration (depending on the environment) for adapters or classes of client connections
Establish client connections (permanent connections are meant here). As your microservice should be ready for any shutdown, you should take care of closing these client connections at such moments. EventMachine and its callback mechanism help a lot with this.
After that your microservice should be loaded and ready for work.
Encapsulate your communication with the services in abstractly named adapters. We name these adapters based on their role (PubSub, SMSMessenger, Mailer, etc.). This way, we can always change the inner implementation of these adapters by replacing the service, as long as the names of our classes are service agnostic.
For example, we almost always use Redis in our applications from the very beginning, thus it is also possible to use it as a message bus, so that we don't have to integrate any other services. However, as the application grows, we should think about solutions like RabbitMQ, which are more appropriate for cases like ours.
If your code is designed in such a way that your classes are coupled with each other, couple them according to the dependency inversion principle. This will help your code avoid issues with library booting.
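The advice above is given in Ruby terms, but the adapter idea is language agnostic; here is the same shape sketched in C# (all names are illustrative, and IRedisClient merely stands in for a real client library):

using System;

// Hypothetical interface standing in for a real Redis client library.
public interface IRedisClient
{
    void Publish(string channel, string message);
    void Subscribe(string channel, Action<string> handler);
}

// Role-based adapter name: callers depend on "a pub/sub bus", not on Redis.
public interface IPubSub
{
    void Publish(string channel, string message);
    void Subscribe(string channel, Action<string> handler);
}

// Redis-backed implementation. The connection is injected (dependency
// inversion), so moving to RabbitMQ later means writing one new IPubSub
// implementation; none of the calling code changes.
public class RedisPubSub : IPubSub
{
    private readonly IRedisClient client;

    public RedisPubSub(IRedisClient client) { this.client = client; }

    public void Publish(string channel, string message) { client.Publish(channel, message); }
    public void Subscribe(string channel, Action<string> handler) { client.Subscribe(channel, handler); }
}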
You can try splitting an existing monolithic application to gain perspective on microservice architecture.
I wrote this article, which talks about splitting a Django App into microservices. Hope it helps.

How to consume WSDL that defines overloaded functions, using Groovy or Ruby?

It is common for WSDL generated by Java to contain multiple function definitions with the same function name, differing only by argument type or number.
This poses problems when attempting to consume the WSDL from other languages (particularly languages which don't handle overloading well or at all). For example:
Groovy's WSClient fails outright during initialisation:
java.lang.IllegalArgumentException: An operation with name
[{http://example.com/service-v1}overloadedFunction]
already exists in this service
Ruby's wsdlDriver doesn't fail immediately, but only one version of the overloaded function definitions is invokable (the others seem to be unusable).
Assuming I'm unable to modify the service, is there a good way to handle this? Perhaps an option on these SOAP client libraries, different libraries, or a well-established transform of the WSDL?
Genesis/Restatement of the problem:
The issue is the generation of consumer-side proxies for communicating with a web service over SOAP, where the WSDL does not follow the WS-I Basic Profile - specifically, it exposes operations of the same name under the same PortType.
Addressing the specifically mentioned client generators:
Groovy's WSClient explicitly states in its module overview:
"If you need to quickly consume and/or publish WS-I compliant web services, GroovyWS can help you."
The Ruby language allows for classes to have methods of the same name, but the last one defined is the only one that the runtime will ever execute.
Options:
Create an Intermediary:
Using a language that (easily) supports the creation of client proxies for overloaded PortType operations, create a new web service that exposes the operations in a WS-I Basic Profile compatible manner and proxies the requests back to the original service. This is a manifestation of the Adapter pattern.
Pros: This adapter will serve any consumer capable of generating proxies for WS-I Basic Profile compliant WSDLs. Also, if the provider service changes, you may be able to change the intermediary service without changing the interfaces it provides to your consuming programs.
Cons: You will need to set up your own server for the web services.
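As an illustration, here is a minimal sketch of such an intermediary in C#/WCF (one possible hosting platform; all names are hypothetical, and the proxy to the original service is assumed to come from a tool that can handle the overloads):

using System.ServiceModel;

// Hypothetical view of the original service's overloaded operations,
// implemented by whatever proxy your platform can generate for it.
public interface IOriginalService
{
    string OverloadedFunction(string name);
    string OverloadedFunction(int id);
}

// WS-I friendly contract: one unique name per operation.
[ServiceContract]
public interface IAdapterService
{
    [OperationContract]
    string OverloadedFunctionByName(string name);

    [OperationContract]
    string OverloadedFunctionById(int id);
}

// The adapter forwards each uniquely named operation to the matching
// overload on the original service.
public class AdapterService : IAdapterService
{
    private readonly IOriginalService original;

    public AdapterService(IOriginalService original) { this.original = original; }

    public string OverloadedFunctionByName(string name) { return original.OverloadedFunction(name); }
    public string OverloadedFunctionById(int id) { return original.OverloadedFunction(id); }
}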
Change the Generated Code:
This is a great solution for the Ruby wsdlDriver issue, since the proxy generator successfully generates a proxy method per operation. All you need to do is change the method names in the Ruby class to be unique.
Pros: Very easy.
Cons: Only applies to a subset of client generators, such as Ruby's wsdlDriver. Also, the proxies will have to be edited every time they are regenerated.
Give the client generators an altered WSDL:
Download the WSDL and manually change it to expose only the definition of the operation that you want, or rename the operations so they have unique names (change MyOp, MyOp, MyOp -> MyOp1, MyOp2, MyOp3). You will almost certainly need to alter the generated code.
Pros: Simplicity.
Cons: If you have a large or ever changing number of WSDL documents to process, this can be time consuming. Also, the proxies will have to be edited every time they are regenerated.

Should I build a REST backend for GWT application

I am planning a new application and have been experimenting with GWT as a possible frontend. The design question I am facing is this.
Should I use
Option A: GWT-RPC and build the app quickly
Option B: Build a REST backend using Spring MVC 3.0 with all the great @Controller, @Service, @Repository annotations, and build a client-side library to talk to the backend using the GWT overlay features and the GWT RequestBuilder?
I am interested in all the pros and cons, and in people's experiences with this type of design.
Ask yourself the question: "Will I need to reuse the server-side interface with a non-GWT front-end?"
If the answer is "no, I'll just have a GWT client": You can use GWT-RPC, and take advantage of the fact that you can use your Java objects both on the server and the client-side. This can also make the communication a bit more efficient, at least when used with <inherits name="com.google.gwt.user.RemoteServiceObfuscateTypeNames" />, which shortens the type names to small numeric values. You'll also get the advantage of better error handling (using Exceptions), type safety, etc.
If the answer is "yes, I'll make my service accessible for multiple kinds of front-ends": You can use REST with JSON (or XML), which can also be understood by non-GWT clients. In addition to switching clients, this would also allow you to switch to a different server implementation (maybe non-Java) in the future more easily. The disadvantage is, that you'll probably have to write wrappers (JavaScript Overlay Types) or transformation code on the GWT client side to build nice Java objects from the JSON objects. You'll have to be especially careful when you deploy a new version of the service, which brings us back to the lack of type safety.
The third option of course would be to build both. I'd choose this option if the public REST interface should be different from the GWT-RPC interface anyway - maybe providing just a subset of easy-to-use services.
You can do both if you also use the RestyGWT project. It will make calling REST-based JSON resources as easy as using GWT-RPC. Plus, you can typically reuse the same request/response DTOs from the server side on the client side.
We ran into the same issue when we created the Spiffy UI Framework. We chose REST and I would never go back. I'd even say GWT-RPC is a GWT Anti-pattern.
REST is a good idea even if you never intend to expose your REST endpoints. Creating a REST API will make your UI faster, your API better, and your entire application more maintainable.
I would say build a REST backend. In my last project we started by developing with GWT-RPC for the first few months, as we wanted fast bootstrapping. Later on, when we needed the REST API, the refactoring was so expensive that we ended up with two backend APIs (REST and RPC).
If you build a proper REST backend, and deserialization infrastructure on the client side (to transform the JSON/XML into GWT Java objects), then the benefit of RPC is almost nothing.
Another sometimes-forgotten advantage of the REST approach is that it's more natural for the browser running the client; RPC is a proprietary protocol in which all requests use POST. You can benefit from client-side caching when reading resources in the standard way.
Answering ams's comments:
Regarding the RPC protocol: the last time I "sniffed" it using Firebug, it didn't look like JSON, so I can't say for sure. Though even if it is JSON-based, it still uses only the HTTP POST method to communicate with the server, so my point about caching is still valid; the browser won't cache POST requests.
Regarding the retrospective and what could have been done better: writing the RPC service in a resource-oriented architecture could make porting to REST easier later. Remember that in REST one usually exposes resources with the basic CRUD operations; if you focus on that approach when writing the RPC service, then you should be fine.
The REST architectural style promotes inspectable messages (which aids debugging and security), API evolution, multiple platforms, simple interfaces, failure recovery, high scalability, and (optionally) extensible systems via code on demand. It trades per-interaction performance for overall network efficiency. It reduces the server's control over consistent application behavior.
The "RPC style" (as we speak of it in opposition to REST) promotes platform uniformity, interface variability, code generation (and thereby the ability to pretend the network doesn't exist, but see the Fallacies), and customized interactions. It trades overall network efficiency for high per-interaction performance. It increases the server's control over consistent application behavior.
If your application desires the former qualities, use the REST style. If it desires the latter, use the RPC style.
If you're planning on using Hibernate/JPA on the server side and sending the resulting POJOs with relational data in them to the client (i.e. an Employee object with a collection of Phones), definitely go with the REST implementation.
I started my GWT project a month ago using GWT RPC. All was well until I tried to serialize an object from the underlying db with a One-To-Many relationship in it. And got the dreaded:
com.google.gwt.user.client.rpc.SerializationException: Type 'org.hibernate.collection.PersistentList' was not included in the set of types which can be serialized by this SerializationPolicy
If you encounter this and want to stay with GWT RPC you will have to use something like:
GWT RequestFactory (www.gwtproject.org/doc/latest/DevGuideRequestFactory.html) - which forces you to write 3+ classes/interfaces per POJO you want to share with the client. OUCH!
Gilead (sourceforge.net/projects/gilead/) - which appears to be a dead project.
I'm now using RestyGWT. The switch was fairly painless and my POJOs serialize without issue.
I would say that this depends on the scope of your total application. If your backend will be used by other clients and needs to be extensible, then create a separate module using REST. If the backend is to be used only by this client, then go for the GWT-RPC solution.

Why are WCF Service Reference name spaces relative to my WCF client project's default namespace?

I have a WCF service with a namespace called:
MyCompany.MyApplication.Configuration.ConfigurationHelperService
On the client side I have an assembly which consumes this service, called:
MyCompany.MyApplication.Core (this is the default namespace)
When I add the service reference, the namespace I'm asked to specify in the Add Service Reference dialogue ends up getting tacked on the end of the client assembly namespace:
MyCompany.MyApplication.Core.MyCompany.MyApplication.Configuration
.ConfigurationHelperService
Because I'm asked for a namespace at this time it seems natural to specify the name of the remote service namespace. i.e. I'd like to refer to my remote service classes using their namespace MyCompany.MyApplication.Configuration.ConfigurationHelperService because they're technically not part of the client.
My questions are:
What's the rationale behind this, is this something to do with semantics?
Should I try to resist changing this behaviour by modifying the client side generated source to get the namespace I want?
I've lived with this for a long time (you have the same problem with ASMX web service clients) but have never seen a written down explanation why Visual Studio (and I guess svcutil.exe) works this way.
Well, I think you have two choices, really:
if you control both ends of the wire, e.g. you write both the server and the client, you could put all the shared items, like service contracts and data contracts, into a separate assembly and share that between client and server. That way, nothing is duplicated, and both ends of the communication refer to exactly the same types in a given namespace of your choice (see the sketch after this list)
get used to the fact that if you add a WCF service reference in Visual Studio, you're basically getting a whole slew of duplication - because if you're not controlling both ends of the communication, all WCF can really go on is the metadata exchanged between service and client (through the WSDL or the MEX endpoint on the service). And since the generated code clearly is part of the client, which is completely separate from the service (all they typically share are the wire formats defined in the XML schema - nothing else), its namespace will also be client-oriented. I think this is a (good) feature, and not something I'd try to combat.
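For illustration, a minimal sketch of the first (shared assembly) option, reusing the namespaces from the question (the operation, binding, and address are assumptions for the sketch):

using System;
using System.ServiceModel;

namespace MyCompany.MyApplication.Configuration
{
    // Lives in the shared assembly referenced by both client and server,
    // so the contract keeps exactly the namespace you chose.
    [ServiceContract]
    public interface IConfigurationHelperService
    {
        [OperationContract]
        string GetSetting(string key);
    }
}

public static class Client
{
    public static void Main()
    {
        // No Add Service Reference and no duplicated generated types: the
        // client builds a channel directly from the shared contract.
        var factory = new ChannelFactory<MyCompany.MyApplication.Configuration.IConfigurationHelperService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8000/ConfigurationHelperService"));

        var channel = factory.CreateChannel();
        Console.WriteLine(channel.GetSetting("timeout"));
        factory.Close();
    }
}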
By default, in an SOA world using WCF, the client and the service are totally independent of one another. There's no "remote object" connection or anything like that between the two: the client proxy has a method call happen, bundles up the parameters passed in plus some information about which method on the server to call, and serializes it all into a message (read: a text/XML message, basically). That message is sent across the wire to the server, which then handles it and returns a response.
So this is not just a .NET function call or something - those two pieces of your system are (by default) absolutely independent of one another. Considering that, to me at least, it makes sense that everything the client does is placed in the client's namespaces - after all, the server could be something totally different, like Java, PHP, or an IBM mainframe - you typically don't have any clue what it is (and don't need to).

Unit testing a module that checks internet connectivity

I have a C# module responsible for acquiring the list of network adapters that are "connected to the internet" on a Windows Vista machine. The module uses the Network List Manager API (or NLM API) to iterate over all network connections and returns all those for which the IsConnectedToInternet value is true.
I received some suggestions for the implementation of this module in this SO question
To test this module I've decided to write a helper that returns the list of internet-connected interfaces based on different logic, so it would be a sort of "reality check" for the original module's logic. Note that for the test helper I am willing to use detection methods that might be considered bad practice for production code (e.g. relying on some internet resource like "Google" being available - if it shuts down, or is blocked by our internal firewall, etc., it's relatively easy to fix the test, as opposed to a deployed product base).
The alternative detection method I chose was to try to connect to "www.google.com:80" with a TcpClient. My problem: When I have more than one connected adapter (e.g. both wireless and LAN) the detection method fails for one of them with the error "A connect request was made on an already-connected socket".
My question is three fold:
How would you go about testing such a module in general? Do you support the idea of doing the same thing in a different way and comparing the results, or is that overkill and I should rely on the system's API? My main problem here is that it's very hard to pre-configure the system so that I'll know what the expected results are in advance.
What alternative logic would you suggest? One thing that was suggested in the aforementioned question was looking at the routing table - what about considering each adapter that has a routing entry with a destination of 0.0.0.0 as "connected to the internet"? Other suggestions?
Do you understand why I get the "already-connected" error with the current test logic?
I can only answer your question about the unit test.
The code you're testing is, in your own words, "a C# module responsible for acquiring the list of network adapters that are 'connected to the internet' on a windows Vista machine. The module uses the 'Network List Manager API' (or NLM API) to iterate over all network connections and returns all those for which the IsConnectedToInternet value is true."
If I were writing this module, I would first use an interface for the NLM API, call it...NLMAPIService. Now, for the real code, create an Adapter that implements NLMAPIService and adapts the real NLM API.
For testing, create a class FakeNLMAPI that implements NLMAPIService and has all of its data in-memory somewhere, or in an XML file, or whatever. Your module calls methods only on the NLMAPIService, so you don't have to change any "real" code depending on whether you're testing or not.
Therefore, in your test setup method, you can instantiate FakeNLMAPI and pass it to your module, and in production, instantiate your NLM API Adapter.
I'm going to assume that you can instantiate and modify the object that represents a network connection. If not, you can follow the same pattern for faking the actual network connection object.
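A minimal sketch of that arrangement (all names are illustrative; the real NLM API types would differ):

using System.Collections.Generic;
using System.Linq;

// Abstraction over the NLM API; the module under test depends only on this.
public interface NLMAPIService
{
    IEnumerable<NetworkConnectionInfo> GetNetworkConnections();
}

// Minimal data holder for the sketch.
public class NetworkConnectionInfo
{
    public string AdapterName { get; set; }
    public bool IsConnectedToInternet { get; set; }
}

// Test double: returns canned in-memory data instead of touching the real API.
public class FakeNLMAPI : NLMAPIService
{
    private readonly List<NetworkConnectionInfo> connections;

    public FakeNLMAPI(List<NetworkConnectionInfo> connections) { this.connections = connections; }

    public IEnumerable<NetworkConnectionInfo> GetNetworkConnections() { return connections; }
}

// The module under test filters connections exactly as described above.
public class ConnectedAdapterLister
{
    private readonly NLMAPIService api;

    public ConnectedAdapterLister(NLMAPIService api) { this.api = api; }

    public IEnumerable<NetworkConnectionInfo> GetConnectedAdapters()
    {
        return api.GetNetworkConnections().Where(c => c.IsConnectedToInternet);
    }
}

In the test setup, construct a FakeNLMAPI with whatever connection list the test case needs and pass it to ConnectedAdapterLister; in production, pass the adapter that wraps the real NLM API.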
Dependency Injection is a very handy pattern to deal with issues like this. Instead of simply using the NLM API components directly in your code define an interface and a class that implements it and serves as a proxy to the NLM API. Pass an instance of this class to your module in the constructor and have your module use it. In your unit tests, instead of the real proxy object, use a mock object that returns known information -- it doesn't even have to reference the NLM API -- to use in testing the logic of your module. Granted, your proxy class will need some testing as well, but the logic in it is much simpler -- probably just some data marshaling. You might be able to convince yourself of its correctness or, if not, do some manual testing on it to make sure that it is working properly.
Unit tests shouldn't access external resources. To unit test your method, I would stub out the Network List Manager API.
You still need an acceptance test layer. In that test environment you should replicate the various configurations you expect to support: set up your own web hosts, routers, and machine configs. Acceptance testing should be done at the user-experience level, using a tool like FitNesse.
