How can I call a webmethod of one application from another application, when both are developed in C#?
You can't do this directly, of course. If any application could simply call methods inside another application, it would be a huge security hole.
As suggested by your tag, it would be necessary for the developer of an application to explicitly expose to the world the methods he wants to be called from other applications. This could be done through WCF, or possibly through COM.
Alternatively, the code to be called could be placed into a class library and referenced by both projects.
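For illustration, here is a minimal WCF sketch of what "explicitly exposing" a method could look like. The contract name, address and binding are all made up, not a prescription:

using System;
using System.ServiceModel;

// Contract and implementation live in the application that exposes the method.
[ServiceContract]
public interface IStatusService
{
    [OperationContract]
    string GetStatus();
}

public class StatusService : IStatusService
{
    public string GetStatus() { return "OK"; }
}

class ExposingAppExample
{
    static void Main()
    {
        // The exposing application hosts the endpoint...
        var host = new ServiceHost(typeof(StatusService), new Uri("net.tcp://localhost:8000/status"));
        host.AddServiceEndpoint(typeof(IStatusService), new NetTcpBinding(), "");
        host.Open();

        // ...and the other application (shown inline here for brevity) calls it
        // through a channel or a generated service reference.
        var factory = new ChannelFactory<IStatusService>(
            new NetTcpBinding(), new EndpointAddress("net.tcp://localhost:8000/status"));
        Console.WriteLine(factory.CreateChannel().GetStatus());

        host.Close();
    }
}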
Expose the method through SOAP or REST or COM or (going old-school) CORBA or ...
Be prepared for doing this to massively increase the complexity of the applications. You have to start worrying properly about security, about how all the pieces interact, and about many other issues. There's a lot of depth here, far too much for a simple answer.
This can be done using WCF instead of a web service.
How would a third-party client use my API methods when it has no knowledge of my DTOs (the objects the web service returns or takes as parameters)? Do I need to expose my DTOs somehow?
Documentation is your friend here. Publish some docs showing what the DTOs should be. If you know your clients, you could create packages that contain the proper DTOs. We did this for our .NET clients: we published a portable class library to NuGet so any of these clients could download the package and use them. However, we have since stopped, because this may overwhelm the client app developer. For example, say you have 100 DTOs, but a simple client app really only needs 5 of them. By including the package, there are now so many options that it might be confusing to know which DTOs to actually use, and this can lead to the client app doing more than it should. We like to keep our client apps lean by only using the DTOs they need. Yes, there is a little DTO definition duplication.
On the flip side, if you went the package route, you could essentially build up an SDK for using your API. You'll see Microsoft do this a lot to help with the complexity of areas such as Azure Storage or Azure Service Bus. All of these have backing REST APIs, but the SDK ensures they are used in the way they were designed for, and possibly in the most optimized way.
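To make that concrete, here's a hypothetical example: the client keeps its own local copy of just the DTOs it needs, matching the documented contract. OrderDto and its fields are invented, and Json.NET is assumed as the serializer:

using Newtonsoft.Json;

// Local copy of the documented DTO - only the fields this client actually uses.
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderParsing
{
    // Turn the raw JSON returned by the API into the local DTO.
    public static OrderDto Parse(string jsonFromApi)
    {
        return JsonConvert.DeserializeObject<OrderDto>(jsonFromApi);
    }
}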
I am trying to figure out the best way to go about doing this: I am working on a project and I'm putting all my data access layer code into .ASMX files to keep them separated from my presentation layer. I am calling all my methods from the code behind and using the web services like class files. I am following this practice based on one other developer's work. Two opinions on this so far: One says when the code-behind calls the method from the web service, it's a performance hit because it has to go do an HTTP request and the other says, no performance hit. The ASMX files are within the same project on the same server. Is there indeed a performance hit or not really? I tend to think not.
Any help or opinion on this would be appreciated.
If you call as a web service, you still have to go through the proxy and argument marshalling even if you are calling within the same server; there is a performance hit compared to calling the same class directly; the call overhead may be orders of magnitude higher. You wouldn't want to do this if the called method isn't doing some substantial work.
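Roughly, the difference looks like this. The class names are invented, and the proxy mentioned in the comments is the kind of class "Add Web Reference" generates:

// The logic, as a plain class (this is also what the .asmx code-behind class is):
public class CustomerService
{
    public string[] GetCustomers()
    {
        return new[] { "Alice", "Bob" };
    }
}

class CallComparison
{
    static void Main()
    {
        // Direct, in-process call: no HTTP, no XML, just a method call.
        var direct = new CustomerService().GetCustomers();

        // Calling the same method through a generated web service proxy
        // (e.g. a class created by "Add Web Reference") would instead issue an
        // HTTP POST with a SOAP envelope and parse the response, even when the
        // .asmx lives in the same project on the same server:
        // var proxy = new localhost.CustomerService();   // illustrative proxy
        // var viaHttp = proxy.GetCustomers();
    }
}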
I am planning a new application and have been experimenting with GWT as a possible frontend. The design question I am facing is this.
Should I use
Option A: GWT-RPC and build the app quickly
Option B: Build a REST backend using Spring MVC 3.0 with all the great @Controller, @Service, @Repository annotations, and build a client-side library to talk to the backend using the GWT overlay features and the GWT RequestBuilder?
I am interested in all the pros and cons, and in people's experiences with this type of design.
Ask yourself the question: "Will I need to reuse the server-side interface with a non-GWT front-end?"
If the answer is "no, I'll just have a GWT client": You can use GWT-RPC, and take advantage of the fact that you can use your Java objects both on the server and the client-side. This can also make the communication a bit more efficient, at least when used with <inherits name="com.google.gwt.user.RemoteServiceObfuscateTypeNames" />, which shortens the type names to small numeric values. You'll also get the advantage of better error handling (using Exceptions), type safety, etc.
If the answer is "yes, I'll make my service accessible for multiple kinds of front-ends": You can use REST with JSON (or XML), which can also be understood by non-GWT clients. In addition to switching clients, this would also allow you to switch to a different server implementation (maybe non-Java) in the future more easily. The disadvantage is, that you'll probably have to write wrappers (JavaScript Overlay Types) or transformation code on the GWT client side to build nice Java objects from the JSON objects. You'll have to be especially careful when you deploy a new version of the service, which brings us back to the lack of type safety.
The third option of course would be to build both. I'd choose this option, if the public REST interface should be different from the GWT-RPC interface anyway - maybe providing just a subset of easy to use services.
You can do both if you also use the RestyGWT project. It makes calling REST-based JSON resources as easy as using GWT-RPC. Plus, you can typically reuse the same request/response DTOs from the server side on the client side.
We ran into the same issue when we created the Spiffy UI Framework. We chose REST and I would never go back. I'd even say GWT-RPC is a GWT Anti-pattern.
REST is a good idea even if you never intend to expose your REST endpoints. Creating a REST API will make your UI faster, your API better, and your entire application more maintainable.
I would say build a REST backend. In my last project we started by developing with GWT-RPC for the first few months because we wanted fast bootstrapping. Later on, when we needed the REST API, the refactoring was so expensive that we ended up with two backend APIs (REST and RPC).
If you build a proper REST backend, and a deserialization infrastructure on the client side (to transform the JSON/XML into GWT Java objects), then the benefit of RPC is almost nothing.
Another sometimes forgotten advantage of the REST approach is that it's more natural for the browser running the client; RPC is a proprietary protocol in which all the requests use POST. You can benefit from client-side caching when reading resources in the standard way.
Answering ams's comments:
Regarding the RPC protocol, the last time I "sniffed" it using Firebug it didn't look like JSON, so I don't know about that. Still, even if it is JSON-based, it uses only the HTTP POST method to communicate with the server, so my point about caching remains valid: the browser won't cache POST requests.
Regarding the retrospective and what could have been done better: writing the RPC service in a resource-oriented architecture could make later porting to REST easier. Remember that in REST one usually exposes resources with the basic CRUD operations; if you focus on that approach when writing the RPC service, then you should be fine.
The REST architectural style promotes inspectable messages (which aids debugging and security), API evolution, multiple platforms, simple interfaces, failure recovery, high scalability, and (optionally) extensible systems via code on demand. It trades per-interaction performance for overall network efficiency. It reduces the server's control over consistent application behavior.
The "RPC style" (as we speak of it in opposition to REST) promotes platform uniformity, interface variability, code generation (and thereby the ability to pretend the network doesn't exist, but see the Fallacies), and customized interactions. It trades overall network efficiency for high per-interaction performance. It increases the server's control over consistent application behavior.
If your application desires the former qualities, use the REST style. If it desires the latter, use the RPC style.
If you're planning on using Hibernate/JPA on the server side and sending the resulting POJOs with relational data in them to the client (i.e. an Employee object with a collection of Phones), definitely go with the REST implementation.
I started my GWT project a month ago using GWT-RPC. All was well until I tried to serialize an object from the underlying DB with a One-To-Many relationship in it, and got the dreaded:
com.google.gwt.user.client.rpc.SerializationException: Type 'org.hibernate.collection.PersistentList' was not included in the set of types which can be serialized by this SerializationPolicy
If you encounter this and want to stay with GWT RPC you will have to use something like:
GWT Request Factory (www.gwtproject.org/doc/latest/DevGuideRequestFactory.html) - which forces you to write 3+ classes/interfaces per POJO you want to share with the client. OUCH!
Gilead (sourceforge.net/projects/gilead/) - which appears to be a dead project.
I'm now using RestyGWT. The switch was fairly painless and my POJOs serialize without issue.
I would say that this depends on the scope of your total application. If your backend should be used by other clients, needs to be extensible, etc., then create a separate module using REST. If the backend is to be used by only this client, then go for the GWT-RPC solution.
I want to consume a WSDL file with VB6. Can anyone help me? Or how can I convert the WSDL to a proxy class?
You can look at either Microsoft's SOAP Toolkit or PocketSOAP. Might be best to look at both, but don't despair over the learning curve. Both offer simple approaches for simple situations as well as complex solutions for more complex ones.
First, you have a problem with terminology: you do not want to consume the file. The file is a description of a web service. It is the web service that you want to consume. The WSDL gives you all the information you need to consume it.
There are methods to consume a web service in VB6. In the same way you shouldn't be using VB6, you shouldn't be using any of these methods.
You should use VB.NET to create a small COM component. This component will consume the service by using "Add Service Reference" to create proxy classes. You will be able to use modern tools and techniques to develop and debug this component.
You can then consume the COM component from VB6, just like any other COM component.
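A rough sketch of such a component, with all names invented for illustration (C# shown here; the same approach works in VB.NET):

using System;
using System.Runtime.InteropServices;

// Build this as a .NET class library and register it for COM with:
//   regasm /codebase MyServiceBridge.dll
// VB6 can then do: Set o = CreateObject("MyServiceBridge.ForecastClient")
[ComVisible(true)]
[ProgId("MyServiceBridge.ForecastClient")]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class ForecastClient
{
    public string GetForecast(string city)
    {
        // In the real component, "Add Service Reference" generates a typed
        // client from the WSDL and the call would go through it, e.g.:
        //   using (var client = new ForecastService.ForecastServiceClient())
        //       return client.GetForecast(city);
        return "(not wired to the service yet: " + city + ")";
    }
}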
Basically, you can use the SOAP moniker like this:
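' The soap: moniker (installed with the Microsoft SOAP Toolkit) builds a late-bound proxy from the WSDL at run time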
Set oProxy = GetObject("soap:wsdl=http://server/folder/service.wsdl")
oProxy.Method "Param1"
You can check out the answers to What is the best way to consume a web service from VB6?
I have been working in C# for a few months and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes the currently playing song to a file in a directory, rewriting the file every time a new song starts playing.
Now I want to add this information to a database, keep a record of it, and possibly show the information on my home page. I know I can hack together a way to make it work, but I want to know what the "professional way" of doing things would be.
I came up with the following and got stuck. I would need an ODBC driver to connect to a database, which seems messy. Would a web service work? How would that work? Can a VBScript call a DLL to call upon a web service to modify data on a separate server? Is that safe to do?
Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer (sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and would live on your server.
The Database Access/Business layer would be your code that retrieves and stores data to/from your database. It might also contain business objects, which are basically classes that have properties representing your data from your database. The benefit of the data access layer is that reading and writing to a database can sometimes require specialized code, and you don't want that code sprinkled throughout your application. Instead you can call functions in your data access layer that load the needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Objects. There are lots of variations on this, of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes: since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
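For the song-logging scenario in the question, a loose sketch of that layer could look like this (the table, columns and class names are all invented):

using System;
using System.Data.SqlClient;

// The POCO: just properties mirroring a row of a hypothetical Plays table.
public class SongPlay
{
    public string Title { get; set; }
    public string Artist { get; set; }
    public DateTime PlayedAt { get; set; }
}

// The data access layer: the only place that knows about SQL and connection strings.
public class SongPlayRepository
{
    private readonly string _connectionString;

    public SongPlayRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Add(SongPlay play)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Plays (Title, Artist, PlayedAt) VALUES (@title, @artist, @playedAt)", conn))
        {
            cmd.Parameters.AddWithValue("@title", play.Title);
            cmd.Parameters.AddWithValue("@artist", play.Artist);
            cmd.Parameters.AddWithValue("@playedAt", play.PlayedAt);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}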
Sometimes the business layer will be its own layer and would contain most of the "logic" of the application. It would sit between the data access and web service layers. Using concepts from Service Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like classes that are always instantiated, sitting there waiting to process requests. Your web request handling service would take a request and maybe first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits. Not always, though. SOA, when done right, is more scalable, so it really depends on whether SOA offers you additional benefits that you need.
The web service would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
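Continuing the same sketch, the web service stays thin and just delegates downward. A classic ASMX-style method is shown as one possibility; the names remain illustrative:

using System;
using System.Web.Services;

[WebService(Namespace = "http://example.com/songlog/")]
public class SongLogService : WebService
{
    [WebMethod]
    public void RecordPlay(string title, string artist)
    {
        // Accept and interpret the request, then hand the real work to the lower layers.
        var repository = new SongPlayRepository("...connection string here...");
        repository.Add(new SongPlay { Title = title, Artist = artist, PlayedAt = DateTime.UtcNow });
    }
}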
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in your web service layer.
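A sketch of that client-side layer (names invented; a generated proxy from "Add Web Reference" would normally replace the hand-rolled HTTP call shown here):

using System;
using System.Collections.Specialized;
using System.Net;

// The "web service client" layer: the only part of the client app that knows
// how to turn a method call into a message on the wire.
public class SongLogClient
{
    private readonly string _serviceUrl;

    public SongLogClient(string serviceUrl)
    {
        _serviceUrl = serviceUrl;
    }

    public void RecordPlay(string title, string artist)
    {
        // A generated proxy would normally do this; a plain HTTP POST is shown
        // only to make the layering visible.
        using (var web = new WebClient())
        {
            var form = new NameValueCollection { { "title", title }, { "artist", artist } };
            web.UploadValues(_serviceUrl + "/RecordPlay", form);
        }
    }
}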
Now, all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application more complex overall, each component becomes simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few, and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways. I have heard it said something along the lines of "If a class is getting too big and too complex, then split it up into a few more classes." In essence, don't start subdividing stuff for the heck of it just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding if you want to split it up. Sometimes, for simple cases, you have a layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.