Why are WCF Service Reference namespaces relative to my WCF client project's default namespace? - visual-studio

I have a WCF service with a namespace called:
MyCompany.MyApplication.Configuration.ConfigurationHelperService
On the client side I have an assembly which consumes this service, called:
MyCompany.MyApplication.Core (this is the default namespace)
When I add the service reference, the namespace I'm asked to specify in the Add Service Reference dialogue ends up getting tacked on the end of the client assembly namespace:
MyCompany.MyApplication.Core.MyCompany.MyApplication.Configuration.ConfigurationHelperService
Because I'm asked for a namespace at this point, it seems natural to specify the name of the remote service namespace, i.e. I'd like to refer to my remote service classes using their own namespace MyCompany.MyApplication.Configuration.ConfigurationHelperService, because they're technically not part of the client.
My questions are:
What's the rationale behind this? Is it something to do with semantics?
Should I try to resist changing this behaviour by modifying the client side generated source to get the namespace I want?
I've lived with this for a long time (you have the same problem with ASMX web service clients) but have never seen a written-down explanation of why Visual Studio (and, I guess, svcutil.exe) works this way.

Well, I think you have two choices, really:
if you control both ends of the wire, e.g. you write the server and the client, you could put all the shared items like service contracts, data contracts etc. into a separate assembly and share that between client and server. That way, nothing would be duplicated, and both ends of the communication would refer to exactly the same items in a given namespace of your choice (a sketch of this appears at the end of this answer)
get used to the fact that if you add a WCF service reference in Visual Studio, you're basically getting a whole slew of duplication - because if you're not controlling both ends of the communication, the metadata exchanged between service and client (through the WSDL or the MEX endpoint on the service) is really all WCF can go on. And since the generated code clearly is part of the client, which is completely separate from the service (all they share, typically, are the wire formats defined in the XML schema - nothing else), its namespace will also be client-oriented. I think this is a (good) feature, and not something I'd try to combat.
By default, in a SOA world using WCF, the client and the service are totally independent of one another. There's no "remote object" connection or anything like that between the two: the client proxy takes the method call, bundles up the parameters passed in plus some information about which method on the server to call, and serializes it all into a message (read: a text/XML message, basically). That message is sent across the wire to the server, which then handles that message and returns a response.
So this is not just a .NET function call or something - those two pieces of your system are (by default) absolutely independent of one another. Considering that, to me at least, it makes sense that everything the client does will be placed in the client's namespaces - after all, the server could be something totally different, like Java, PHP, or an IBM mainframe - you typically don't have any clue what it is (and don't need to).
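As a rough sketch of the first option above, the shared contract assembly could contain something like the following; the assembly and type names (MyCompany.MyApplication.Contracts, IConfigurationHelperService, ConfigurationItem, GetItem) are made up for illustration:

    // Shared assembly (e.g. MyCompany.MyApplication.Contracts), referenced by both client and server.
    using System.ServiceModel;
    using System.Runtime.Serialization;

    namespace MyCompany.MyApplication.Configuration
    {
        [ServiceContract]
        public interface IConfigurationHelperService
        {
            [OperationContract]
            ConfigurationItem GetItem(string key);
        }

        [DataContract]
        public class ConfigurationItem
        {
            [DataMember] public string Key { get; set; }
            [DataMember] public string Value { get; set; }
        }
    }

The client then creates a channel from the shared interface (see the ChannelFactory discussion further down) instead of using Add Service Reference, so both sides use the namespace you chose rather than a generated one.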

Related

Can you consume a SOAP/REST service from within the same KB?

If you have defined a SOAP or REST service within your GeneXus 16u5 Knowledge Base, can you then create an external object for that service and invoke it from within the same KB?
I seem to remember in the past that this was not possible. If it is still the case, is it because of the tools that generate the external object?
Sure you can; it's not common, but you have to be careful with the external object names. To avoid errors during the WSDL import, in the second step you have to:
1- Change the name of the external object to something different from the "internal" object.
2- Keep or put something in the Prefix box. If you leave it blank, a conflict with the "internal" SDT will occur.
Regarding REST, I always consume them manually, so I guess there would be no problem doing it that way.

Channel Factory vs Service Proxy

When to use ChannelFactory, and when to use a service proxy in WCF?
My binding is NetNamedPipeBinding, and I'm planning to use a duplex connection.
When to use a proxy?
We create the proxy using svcutil.exe. The output of this tool is a proxy class, plus corresponding changes to the application configuration file. If you have a service that you know is going to be used by several applications, or is generic enough to be used in several places, you'll want to continue using the generated proxy classes. We use a proxy in WCF to be able to share the service contract and entities with the client. Proxies have several restrictions: the generated types need to have getters and setters, constructors can't be exposed, methods other than those on the service contract cannot be exposed, there is repetition of code, and every time we add or modify a service contract, data contract or message contract we need to regenerate the proxy for the client.
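For example, generating and then using such a proxy typically looks something like this; the service address and the generated type names (MyServiceClient, DoWork) are just placeholders:

    // Generate the proxy class and config from a Visual Studio command prompt:
    //   svcutil.exe http://localhost:8000/MyService?wsdl /out:MyServiceProxy.cs /config:app.config

    // Client code using the generated proxy (it derives from ClientBase<IMyService>):
    var client = new MyServiceClient();
    try
    {
        var result = client.DoWork("some input");  // generated method mirroring the service contract
        client.Close();
    }
    catch
    {
        client.Abort();  // don't leave a faulted channel half-open
        throw;
    }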
When to use ChannelFactory
The other option is using the ChannelFactory class to construct a channel between the client and the service without the need for a proxy. In some cases, you may have a service that is tightly bound to the client application. In such a case, it makes sense to reference the contract interface DLL directly and use ChannelFactory to call your methods through it. One significant advantage of the ChannelFactory route is that it gives you access to methods that wouldn't otherwise be available if you used svcutil.exe.
When to use a ChannelFactory vs Proxy class?
A DLL is helpful if the client code is under your control and you'd like to share more than just the service contract with the client - such as utility methods associated with the entities - making the client and the service code more tightly bound. If you know that your entities will not change much and there isn't much client code, a DLL works better than a proxy. If the clients of your service are external to the system, such as consumers of a public API, it makes sense to use a proxy, because it makes sharing the contract easier by giving out a code file rather than a DLL.
In the case of NetNamedPipeBinding
It's recommended to use ChannelFactory for the following two reasons (see the sketch below):
Ease of use.
Avoiding the proxy layer gives you some extra performance.
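A minimal sketch of that approach over named pipes might look like the following; the contract IMyService, its DoWork method and the pipe address are assumptions, and for a duplex contract you would use DuplexChannelFactory<T> instead:

    using System.ServiceModel;

    // The shared service contract interface, referenced by both client and server (assumed for this sketch).
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void DoWork(string input);
    }

    class Client
    {
        static void Main()
        {
            var binding = new NetNamedPipeBinding();
            var address = new EndpointAddress("net.pipe://localhost/MyService");

            using (var factory = new ChannelFactory<IMyService>(binding, address))
            {
                IMyService channel = factory.CreateChannel();
                channel.DoWork("some input");        // call the contract directly, no generated proxy
                ((IClientChannel)channel).Close();   // close the channel when done
            }
        }
    }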
ChannelFactory and a service proxy are two ways to reach the same goal - consuming your service. Usually, if you control the service contract interface on both the client and the server, you're better off using ChannelFactory, because it is easier to manage. If you manage only the client part, a proxy is the way to go, because otherwise you would not be able to track the changes made on the server side. Besides, the proxy gives you a nice tool for generating async methods for your service :)

How to consume WSDL that defines overloaded functions, using Groovy or Ruby?

It is common for WSDL generated by Java to contain multiple function definitions with the same function name, differing only by argument type or number.
This poses problems when attempting to consume the WSDL from other languages (particularly languages which don't handle overloading well or at all). For example:
Groovy's WSClient fails outright during initialisation:
java.lang.IllegalArgumentException: An operation with name
[{http://example.com/service-v1}overloadedFunction]
already exists in this service
Ruby's wsdlDriver doesn't fail immediately, but only one version of the overloaded function definitions is invokable (the others seem to be unusable).
Assuming I'm unable to modify the service, is there a good way to handle this? Perhaps an option on these SOAP client libraries, different libraries, or a well-established transform of the WSDL?
Genesis/Restatement of the problem:
The issue is the generation of consumer-side proxies for communicating with a web service over SOAP, where the WSDL does not follow the WS-I Basic Profile - specifically by exposing operations of the same name under the same PortType.
Addressing the specifically mentioned client generators:
Groovy's WSClient explicitly states for the module overview:
"If you need to quickly consume and/or publish WS-I compliant web services, GroovyWS can help you."
The Ruby language allows for classes to have methods of the same name, but the last one defined is the only one that the runtime will ever execute.
Options:
Create an Intermediary:
Using a language that (easily) supports the creation of client proxies for overloaded PortType operations, create a new web service which exposes the operations in a WS-I Basic Profile compatible manner and proxies the requests back to the original service. This is a manifestation of the Adapter pattern.
Pros: This adapter will serve any consumer capable of generating proxies for WS-I Basic Profile compliant WSDLs. Also, if the Provider service changes you may be able to change the intermediary service without changing the service interfaces it provides to your consuming programs.
Cons: You will need to set up your own server for the web services.
Change the Generated Code:
This is a great solution for the Ruby wsdlDriver issue, since the proxy generator successfully generates a proxy method per operation. All you need to do is change the method names in the Ruby class to be unique.
Pros: Very easy.
Cons: Only applies to a subset of client generators, such as Ruby's wsdlDriver. Also, the proxies will have to be edited every time they are regenerated.
Give the client generators an altered WSDL:
Download the WSDL and manually change it to expose only the definition of the operation that you want, or rename the operations so they have unique names (change MyOp, MyOp, MyOp -> MyOp1, MyOp2, MyOp3). You will almost certainly need to alter the generated code.
Pros: Simplicity.
Cons: If you have a large or ever changing number of WSDL documents to process, this can be time consuming. Also, the proxies will have to be edited every time they are regenerated.

Passing a delegate using remoting without passing the implementation assembly

I have been stuck on this for a few days and I would ping this community for answers before I give up.
* I would like to pass a delegate from a client application to a server application across app domains using remoting.
* The delegate's definition is in an assembly which is shared between the server and client.
* The delegate itself is an anonymous delegate whose body is declared on the client side.
My problem is that when I pass the delegate over to the server, the server requires the assembly in which the delegate body is declared (one of the client assemblies). Our software architecture prohibits loading the client assembly. In my head, when I think about it, I should be able to pass the IL which defines the delegate over to the server, create a delegate using DynamicMethod and execute it. If that is the case, then why does .NET require the assembly even when the delegate body contains only simple types? Is there a way to remote a delegate without requiring the assembly where the body is declared?
PS: the reason I want to do this is performance. The delegate encapsulates multiple calls to the server. I am unable to modify the server APIs etc. to do this.
Thanks for any information.
I don't think you can do this.
When you pass the delegate to the server, the server will need to be able to load the definition of the class that defines the delegate, so there is no way to get a client-only anonymous method to execute on the server side.
There is a discussion on how to work around this at this link. I don't know if you can reorganize your code to align with that pattern.
It would be an intriguing idea to send some IL over to the other side and execute this in place, but I have no idea if that is possible. Sounds like there would be an awful lot of security and other barriers to cross to get this to work.
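For illustration only, one way to restructure along those lines is to replace the anonymous delegate with a serializable "command" object defined in the shared assembly, so the server only ever needs types it already knows about; all names here are hypothetical:

    using System;
    using System.Collections.Generic;

    // In the shared assembly - referenced by both client and server.
    [Serializable]
    public class BatchedConfigRequest
    {
        // Data describing what the server should do, instead of client-side code.
        public string[] Keys { get; set; }
    }

    // The server exposes one operation that executes the whole batch in a single round trip.
    public interface IConfigService
    {
        Dictionary<string, string> GetMany(BatchedConfigRequest request);
    }

This keeps the "several calls in one round trip" performance benefit without the server ever needing to load the client's code.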

Starting out, any suggestions?

I have been working in C# for a few months and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes the currently playing song to a file in a directory, rewriting what was there before every time a new song starts playing.
Now I want to add this information to a database and keep a record of this and possibly add the information on my home page. I know I can hack a way for it to work, but I want to know what would be the "professional way" of doing things.
I came up with the following and got stuck. I would need an ODBC driver to connect to a database, which seems messy - would a web service work? How would that work? Can VBScript call a DLL file to call upon a web service to modify data on a separate server? Is that safe to do?
Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer(sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and would live on your server.
Database Access/Business layer would be your code that retrieves and stores data to/from your database. It might also contain business objects, which are basically classes that have properties representing your data from your database. The benefit of the data access layer is that sometimes reading and writing to a database can require specialized code, and you don't want that code sprinkled throughout your application. So instead you can call functions in your data access layer that load the needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Object. There are lots of variations on this, of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes. Since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
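For instance, a POCO and a data-access method for the "now playing" scenario could look roughly like this sketch (the table name, column names and connection handling are made up):

    using System;
    using System.Data.SqlClient;

    // Plain Old CLR Object: just data, no database code.
    public class PlayedSong
    {
        public string Title { get; set; }
        public string Artist { get; set; }
        public DateTime PlayedAt { get; set; }
    }

    // Data access layer: the only place that knows about SQL.
    public class PlayedSongRepository
    {
        private readonly string _connectionString;

        public PlayedSongRepository(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void Save(PlayedSong song)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "INSERT INTO PlayedSongs (Title, Artist, PlayedAt) VALUES (@title, @artist, @playedAt)",
                connection))
            {
                command.Parameters.AddWithValue("@title", song.Title);
                command.Parameters.AddWithValue("@artist", song.Artist);
                command.Parameters.AddWithValue("@playedAt", song.PlayedAt);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }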
Sometimes the business layer will be its own layer and would contain most of the "logic" of the application. It would sit between the data access and web service layers. Using concepts from Service Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like classes that are always instantiated, waiting to process requests. Your web request handling service would take a request, and maybe first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits. Not always, though. SOA, when done right, is more scalable, so it really depends on whether SOA offers you additional benefits that you need.
The Webservice would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in your web service client layer.
Now all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application more complex overall, each component becomes simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few, and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways. I have heard it said something along the lines of "if a class is getting too big and too complex, then split it up into a few more classes". In essence, don't start subdividing stuff for the heck of it just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding if you want to split it up. Sometimes, for simple cases, you have a layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
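Tying the pieces together, a minimal web service layer for the "now playing" idea might look like the following WCF-style sketch; the contract name INowPlayingService is invented, and it reuses the PlayedSong/PlayedSongRepository types from the earlier sketch:

    using System;
    using System.ServiceModel;

    // Web service layer: receives requests and delegates to the data access layer.
    [ServiceContract]
    public interface INowPlayingService
    {
        [OperationContract]
        void ReportNowPlaying(string title, string artist);
    }

    public class NowPlayingService : INowPlayingService
    {
        public void ReportNowPlaying(string title, string artist)
        {
            var repository = new PlayedSongRepository("your connection string here");
            repository.Save(new PlayedSong
            {
                Title = title,
                Artist = artist,
                PlayedAt = DateTime.UtcNow
            });
        }
    }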
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
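As a rough example, a .NET class exposed to COM (so that VBScript's CreateObject can reach it) might look like this; the ProgId and class name are made up, and the DLL still has to be registered with regasm:

    using System.Runtime.InteropServices;

    // Compile into a class library, then register for COM with:
    //   regasm MyScriptingBridge.dll /codebase
    [ComVisible(true)]
    [ProgId("MyCompany.NowPlayingReporter")]
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class NowPlayingReporter
    {
        // Called from VBScript roughly like:
        //   Set reporter = CreateObject("MyCompany.NowPlayingReporter")
        //   reporter.Report "Song Title", "Artist"
        public void Report(string title, string artist)
        {
            // Forward to the web service / business layer here.
        }
    }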
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.
