Anything wrong with moving CLI validation/logic server-side? - client-server

I have a client/server application. One of the clients is a CLI. The CLI performs some basic validation, then makes SOAP requests to a server. The response is interpreted and the relevant information is presented to the user. Every command involves a request to a web service.
Every time services are modified server-side, a new CLI needs to be released.
What I'm wondering is whether there would be anything wrong with making my CLI incredibly thin. All it would do is send the command string to the server, where it would be validated and interpreted, and a response string returned.
(Even TAB completion could be done with the server's cooperation.)
I feel in my case this would simplify development and reduce maintenance work.
Are there pitfalls I am overlooking?
UPDATE
Scalability issues are not a high priority.

I think this is really just a matter of taste. The validation has to happen somewhere; you're just trading complexity in your client for the same amount of complexity on your server. That's not necessarily a bad thing for your architecture; you're really just providing an additional service that gives callers an alternate means of accessing your existing services. The only pitfall I'd look out for is code duplication: if you find that your CLI validation is doing the same things as some of your services (parsing numbers, for example), refactor to avoid the duplication.
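As a sketch of that refactoring, a shared validation helper (the function name and rules below are illustrative, not from the question) lets the CLI and the server-side service apply identical parsing in one place:

```python
def parse_quantity(text):
    """Hypothetical shared validator: both the CLI and the
    server-side service call this, so the parsing rules live
    in exactly one place instead of being duplicated."""
    value = int(text)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value
```

Whether this module ships inside the CLI or stays server-side is exactly the trade-off discussed above.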

In general you'd be okay, but client-side validation is a good way to reduce your server's workload, since bad requests can be rejected early.

What I'm wondering is if there would be anything wrong with making my CLI incredibly thin.
...
I feel in my case this would simplify development and reduce maintenance work.
People have been doing this for years using telnet/SSH for remoting a CLI that runs on the server. If all the intelligence must be on the server anyway, there might be no reason to have your CLI be a distributed client with intelligence. Just have it be a terminal session - if you can get away with using SSH, that's what I'd do - then the client piece is done once (or possibly just an off-the-shelf bit of software) and all the maintenance and upgrades happen on the server (welcome to 1978).
Of course this only really applies if there really is no requirement for the client to be intelligent (which sounds like the case in your situation).

Using name/value pairs in a request string is actually pretty prevalent. At that point, though, why bother with SOAP at all? Why not just move to a RESTful architecture instead?
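A minimal sketch of such a thin client, assuming a hypothetical `/cli` endpoint that accepts the whole command as a single name/value pair (the server URL, endpoint path, and parameter name are all assumptions, not part of the question):

```python
import urllib.parse
import urllib.request

SERVER = "http://example.invalid"  # placeholder server address

def build_url(command_line):
    # The entire command string travels as one query parameter;
    # validation and interpretation happen entirely server-side.
    return SERVER + "/cli?" + urllib.parse.urlencode({"cmd": command_line})

def repl():
    """Thin read-eval-print loop: no local parsing beyond 'quit'."""
    while True:
        line = input("> ")
        if line in ("quit", "exit"):
            break
        with urllib.request.urlopen(build_url(line)) as resp:
            print(resp.read().decode())
```

The client stays a dumb pipe, so server-side changes never force a new CLI release.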

Related

How can I test/send multiple (fake) ajax-requests at once to a (node.js) server?

At a certain point your (node.js) app works well with single requests, and you would like to see what happens if fifty people use it at the same time. What happens to the memory usage? What happens to the overall speed of the response?
I reckon this kind of testing is done a lot, so I was thinking there might be a relatively easy helper program for that.
By relatively easy I mean something as convenient as Postman, but Postman's REST client is only for testing a single request and response.
What is your recommended (or favorite) method of testing this?
We use JMeter (http://jmeter.apache.org/); it's free and powerful, and you can set up test use cases and run them.
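If a full tool like JMeter feels heavy, a throwaway concurrency test is only a few lines. The sketch below stands up a trivial local HTTP server and fires 50 requests through 10 worker threads; in practice you would point the URL at your node.js app instead of the stand-in server:

```python
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the output quiet
        pass

# Stand-in for the app under test; port 0 picks a free port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def hit(_):
    with urllib.request.urlopen(url) as resp:
        return resp.status

# 50 requests spread over 10 concurrent threads.
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(hit, range(50)))

server.shutdown()
print(statuses.count(200))
```

It won't give you JMeter's reports, but it answers the "what happens with fifty at once" question quickly.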

Verifying clients when using interprocess communication

I'm building an application that will provide a service to other applications (let's pretend like it solves differential equations). So my DifEq service will be running all the time and a client application can send it requests to solve DifEqs at any point.
This would be trivial using sockets or pipes.
The problem is some applications nefariously want to send linear equations instead of differential equations, so I want to register applications that I know are sending proper DifEqs to my application.
Traditional sockets break down here, as far as I know.
Ideally, I'd like to be able to look at some information about the application that is making a request of me and (either through some metadata on that application, through communication with my web site, or through some other, unknown method) determine that it is an acceptable DifEq app. Furthermore, this ideal method would not be spoofable without a root/admin-level compromise of the underlying OS. If the linear equation app is also a rootkit, I'll concede to being broken. :)
I need to be able to do this on Windows, OS X, and Linux (and maybe Android); but I recognize that it may not be the same solution on all platforms. So, how would you accomplish this (specify the platform you are focusing on, if appropriate)? I've done a lot of server-side development, but it's been way too many years since I've done any client-side development outside the browser and the world is very different today than it was then.
I think your question is a little confusing when it comes to talking about DifEQ vs LinearEQ.
It sounds to me like you are just looking for a routine way to verify that clients are authorized to connect. There is a lot to read on this subject. Common methods would be to use SSL certificates to verify the identity of clients. You can also tunnel over SSH, use OAuth, etc.
You'll have to do some more digging around the web to see what kind of authentication fits your scenario. You mention 'not spoofable'. I think that people generally end up compiling a certificate or private key into their application. This will stop all but the very dedicated and experienced hackers.
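One common "compiled-in key" variant of this is a shared-secret HMAC on each request. It stops casual spoofing but, as noted, not an attacker who extracts the key from a client binary; the key value and payloads below are purely illustrative:

```python
import hashlib
import hmac

# Shared secret compiled into trusted client builds (illustrative value).
CLIENT_KEY = b"trusted-difeq-build-key"

def sign(payload: bytes) -> str:
    """Client side: attach this signature to each request."""
    return hmac.new(CLIENT_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Server side: reject requests whose signature doesn't match.
    compare_digest gives a constant-time comparison, avoiding
    timing side channels."""
    return hmac.compare_digest(sign(payload), signature)
```

A request from an app without the key (say, the nefarious linear-equation client) simply fails verification.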

How to most quickly get small, very frequent updates from a server?

I'm working on the design of a web app which will be using AJAX to communicate with a server on an embedded device. But for one feature, the client will need to get very frequent updates (>10 per second), as close to real time as possible, for an extended period of time. Meanwhile typical AJAX requests will need to be handled from time to time.
Some considerations unique to this project:
This data will be very small, probably no more than a single numeric value.
There will only be 1 client connected to the server at a time, so scaling is not an issue.
The client and server will reside on the same local network, so the connection will be fast and reliable.
The app will be designed for Android devices, so we can take advantage of any platform-specific browser features.
The backend will most likely be implemented in Python using WSGI on Apache or lighttpd, but that is still open for discussion.
I'm looking into Comet techniques, including XHR long polling and the hidden iframe, but I'm pretty new to web development and I don't know what kind of performance we can expect. The server shouldn't have any problem preparing the data; it's just a matter of pushing it out to the client as quickly as possible. Is 10 updates per second an unreasonable expectation for any of the Comet techniques, or even regular AJAX polling? Or is there another method you would suggest?
I realize this is ultimately going to take some prototyping, but if someone can give me a ball-park estimate or better yet specific technologies (client and server side) that would provide the best performance in this case, that would be a great help.
You may want to consider WebSockets. That way you wouldn't have to poll, you would receive data directly from your server. I'm not sure what server implementations are available at this point since it's still a pretty new technology, but I found a blog post about a library for WebSockets on Android:
http://anismiles.wordpress.com/2011/02/03/websocket-support-in-android%E2%80%99s-phonegap-apps/
For a Python back end, you might want to look into Twisted. I would also recommend the WebSocket approach, but failing that, and since you seem to be focused on a browser client, I would default to HTTP streaming rather than polling or long polls. This jQuery plugin implements an HTTP-streaming Ajax client and claims specifically to support Twisted.
I am not sure if this will be helpful at all, but you may want to try Comet-style Ajax:
http://ajaxian.com/archives/comet-a-new-approach-to-ajax-applications
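To give a feel for the HTTP-streaming option, here is a self-contained sketch in which a server pushes one small numeric reading per line over a single response and the client consumes them incrementally. A real deployment would stream from the embedded device rather than localhost, and the values and delay are stand-ins:

```python
import http.server
import threading
import time
import urllib.request

class StreamHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Push one reading per line over a single open response;
        # a browser client would consume these incrementally.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        for value in range(5):        # stand-in for sensor readings
            self.wfile.write(b"%d\n" % value)
            self.wfile.flush()        # push each line immediately
            time.sleep(0.01)

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StreamHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    readings = [int(line) for line in resp]  # arrives line by line

server.shutdown()
```

Since each update is a single small value, the per-update cost is dominated by the write and flush, which is why >10 updates/second over a fast LAN is plausible with streaming where repeated polling would struggle.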

SOA service calling back a client

This is more a theoretical question than a practical one, but although I understand the principles of SOA, I am still a bit unsure whether they can be applied to any app.
The usual example is where a client wants to know something from a server, so we implement a service that can provide that information given a client request; it can be stateless or stateful, etc.
But what happens when we want to be notified when something happens on the server? Maybe we call a service to register a search and want to be notified when a new item that matches our search arrives at the server.
Of course that can be implemented with polling, using long timeouts (long polling), but I cannot see a way in the usual protocols to receive events from the server without first making a request.
If you can point me to an example, or tell me an architecture that could support this, you will have made my day.
Have you considered pub-sub (e.g., WS-Eventing, WS-Notification)? These are the usual means of pushing "stuff" to interested consumers/subscribers.
You want to use a publish-subscribe design. If you are using WCF, check out Programming WCF Services by Juval Löwy. In the appendix he shows how to build a pub-sub system that is actually fully per-call. It doesn't even rely on CallbackContracts or keeping long-running Channels open, so it doesn't require any reconnection logic when communication is broken, let alone any polling.
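The core of any pub-sub design is small. The in-process sketch below (class and topic names are illustrative) shows the register-then-notify flow that WS-Eventing-style systems implement over the wire:

```python
from collections import defaultdict

class Broker:
    """Minimal publish-subscribe hub. In a real SOA deployment the
    callbacks would be remote subscriber endpoints, not local
    functions, but the flow is the same."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Client registers interest once, up front.
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Server pushes the event to every registered subscriber;
        # no subscriber ever has to ask.
        for callback in self.subscribers[topic]:
            callback(event)

broker = Broker()
received = []
broker.subscribe("search/new-item", received.append)  # register the search
broker.publish("search/new-item", {"id": 42})         # matching item arrives
```

The inversion is the whole point: after the one subscribe call, information flows server-to-client without further requests.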

Web Service versus regular Http Request

About 5000 computers will be making a call to a central server, and they will be passing in a GUID to the central server.
The server will then return True/False back to the client.
Is there a big difference in performance between a web service and a regular Http request to a Url on the server?
Yeah, a SOAP envelope is relatively hefty. I'd suggest a REST-based service that will keep the size of data being marshaled around to a minimum.
I assume by web service you mean SOAP. My experience with various SOAP stacks on different platforms (Java, .NET, Ruby, PHP) is that in this case you're probably looking at an order-of-magnitude difference in processing such a simple message. There's usually a good deal of overhead with SOAP, which is negligible if you're passing large messages, but overkill for small ones. Using SOAP for this is like an elephant carrying a penny. I would recommend just a simple HTTP handler in this case.
Are all 5000 clients going to be hitting the server at one time? Do you need to guarantee a certain response time?
REST web services are HTTP.
Consequently, I don't understand the question. Perhaps you should provide more information on the protocol, the messages, whether it's RPC-style or document-style, how big the document is, etc.
I'm not 100% sure there's a performance benefit in response time, but I'd guess that a web service request returning just the true/false would be more efficient overall than a regular HTTP request whose response you then have to parse.
I have an app that currently has about 7000 machines calling a .net web service using WCF.
Each machine makes the call once every 15 minutes. The service takes the data and shoves it into SQL server; which is installed on the same box.
Right now it's collecting about 350MB of data a day.
The load is barely registering and we're in the process of rolling it out to 25,000 clients.
I guess my point is that passing a GUID to the server and getting a true/false value back is not a whole lot of load to be worried about, unless the web server was a POS five years ago.
I would think there wouldn't be much, if any, difference. The HttpRequest may actually be faster just because it uses one less layer in the stack. If you see yourself expanding the services in the future, you might go ahead and use a web service anyway, not because of performance (again, the difference is probably negligible), but because a web service is going to be more maintainable as the services get more complex.
Realistically it won't make much difference. Out of the things that could introduce latency in this request you have:
Compiled code executing
Network round-trips
Database access (presumably that's what you'll be checking against?)
Even if you have a blindingly fast network and database server, the amount of time spent doing the network round trip and database access (which may itself involve another network round trip) will render the overhead of executing the compiled code of whatever web service framework you use insignificant.
SOAP and REST Web Services imply some overhead, but are certainly the way to go if you think you'll need to scale up to returning some other information other than true/false.
An HTTP response of just 1/0 or true/false will be much smaller, and therefore theoretically faster.
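To show how small the "simple HTTP handler" option really is, the sketch below registers an illustrative GUID and answers true/false to a plain GET. The endpoint path, parameter name, and GUID values are all made up for the example:

```python
import http.server
import threading
import urllib.parse
import urllib.request

# Illustrative registry of known GUIDs (a real server would hit a DB).
KNOWN = {"00000000-0000-0000-0000-000000000001"}

class GuidHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        guid = urllib.parse.parse_qs(query).get("guid", [""])[0]
        body = b"true" if guid in KNOWN else b"false"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), GuidHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d/check?guid=" % server.server_address[1]

yes = urllib.request.urlopen(base + "00000000-0000-0000-0000-000000000001").read()
no = urllib.request.urlopen(base + "ffffffff-0000-0000-0000-000000000000").read()
server.shutdown()
```

The whole exchange is one GUID in and four or five bytes out, which is why the SOAP envelope dominates the payload if you go that route instead.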
