I am working on a series of different software products. They are quite old, so we're in the process of refactoring/improving them. My co-worker had the idea of abstracting the GUI and having it run in its own process, communicating with the logical portion of the program via sockets. This would allow us to use the same GUI components with all of the different applications (keeping the same LAF). So my question is: Is this a valid practice for creating a GUI? Would I be better off keeping the GUI tied in with the rest of the program? What are the pros and cons of the different methods, and are there any other methods for implementing GUIs?
Thanks
Yes, it's a perfectly valid way to write a GUI program. This is roughly how web apps work -- the UI (browser) communicates to the business logic server (web server) over a socket.
It's a little bit unusual for a desktop application, but it's quite acceptable. The beauty of this solution is that it lets you write multiple rich clients for different platforms (think mobile app, windows app, browser-based app, etc.)
All you need to do is define the API that a GUI will need to talk to the back end. For example, it will need a way to get objects and save objects, and to receive notifications from the back end that the UI needs updating.
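A rough sketch of what such a contract might look like in C# (the type and member names here are illustrative assumptions, not a prescription):

```csharp
using System;

// Hypothetical contract the GUI process codes against. The transport
// (sockets, pipes, HTTP) lives behind an implementation of this interface.
public interface IBackEndApi
{
    // Fetch an object by its identifier.
    Customer GetCustomer(int id);

    // Persist changes made in the UI.
    void SaveCustomer(Customer customer);

    // Raised by the back end when data changes and the UI should refresh.
    event EventHandler<DataChangedEventArgs> DataChanged;
}

// Minimal types used by the contract above.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DataChangedEventArgs : EventArgs
{
    public string EntityName { get; set; }
}
```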
With service and presentation layers properly designed that should be perfectly all right. To summarize pros and cons in my opinion:
Pros:
UI not bound physically to logic, so logic layers can be remote (or even a standalone BL server for several clients). Let's call it "business logic location independence".
Possibility to create different versions of GUI (and not only graphical - it would be possible to expose BL as a service, for example as a feed or reporting endpoint), "GUI platform independence", and also SOA approach.
Possibility to add a proxy between BL and GUI - for security and caching purposes. Or a load balancer in front of an application farm. Or an adapter to support "old" clients after significant BL changes. ("Resiliency and fail-safety"?)
Deployment could be easier to some extent (fixing bugs in UI wouldn't affect BL layer - just a consequence of binary module independence)
Ability to add "offline mode" to GUI.
Cons:
You're adding one more connection link, which could be yet another fail point, and some effort should be spent on testing it.
Increase of data traffic between GUI and BL, and probably more serialization work (see the sketch after this list).
Need to track communication protocol changes and proper protocol versions maintenance.
(Negative side of proxy ability) Possibility of man-in-the-middle attack between GUI and BL.
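To make the extra link and the serialization work concrete, here is a minimal sketch of one way the GUI process could send a request to the BL process over a socket; the port number and the line-based message format are assumptions purely for illustration:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// GUI-side helper: send one request line to the business-logic process
// and read back a single response line (e.g. a JSON payload).
public static class BusinessLogicClient
{
    public static string SendRequest(string request)
    {
        using (var client = new TcpClient("localhost", 9000))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream, Encoding.UTF8) { AutoFlush = true })
        using (var reader = new StreamReader(stream, Encoding.UTF8))
        {
            writer.WriteLine(request);   // e.g. "GetCustomer 42"
            return reader.ReadLine();    // the serialized reply from the BL process
        }
    }
}
```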
Depends on the type of application.
Desktop applications
It makes sense if the server can be run on a dedicated server. It does not make sense if both the server and the GUI are going to be installed on each desktop (for most applications). Then use different projects/dlls to separate the UI/Business logic.
Web applications
Yes. Many web applications have a separate service layer and use SOAP for communication between the GUI and the service layer.
Sockets
Using vanilla sockets is seldom a good choice today. Why waste energy/time building your own protocol and implementation when there are several excellent IPC frameworks available?
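For example, in .NET one option would be WCF over named pipes for two processes on the same machine, so you never touch raw sockets. The contract, address and implementation below are just an illustrative sketch:

```csharp
using System;
using System.ServiceModel;

// Illustrative contract between the GUI process and the business-logic process.
[ServiceContract]
public interface IBusinessLogic
{
    [OperationContract]
    string GetCustomerName(int id);
}

public class BusinessLogic : IBusinessLogic
{
    public string GetCustomerName(int id)
    {
        return "Customer " + id;
    }
}

public static class Program
{
    public static void Main()
    {
        // Host the business logic over a local named pipe.
        using (var host = new ServiceHost(typeof(BusinessLogic),
                   new Uri("net.pipe://localhost/businesslogic")))
        {
            host.AddServiceEndpoint(typeof(IBusinessLogic),
                new NetNamedPipeBinding(), string.Empty);
            host.Open();

            // The GUI process would connect roughly like this.
            var factory = new ChannelFactory<IBusinessLogic>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/businesslogic"));
            IBusinessLogic proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetCustomerName(42));
        }
    }
}
```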
Update in response to comment
Divide and conquer. Break down the UI into components as small as possible to make them reusable. Put those components into a separate project/dll. A sample component could be a UserTable which presents a list of all users (taking a dependency on the interface IUserService).
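As a sketch of that idea (WinForms is just one possible host, and the member names are assumptions):

```csharp
using System.Collections.Generic;
using System.Windows.Forms;

// The interface the reusable component depends on; each application
// supplies its own implementation (local DB, web service, etc.).
public interface IUserService
{
    IEnumerable<User> GetAllUsers();
}

public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Reusable UI component that knows nothing about where users come from.
public class UserTable : DataGridView
{
    private readonly IUserService _userService;

    public UserTable(IUserService userService)
    {
        _userService = userService;
    }

    // Populate the grid from whatever service the host application injects.
    public void LoadUsers()
    {
        DataSource = new List<User>(_userService.GetAllUsers());
    }
}
```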
Don't try to reuse the entire UI layer, since that is doomed to fail. The reason is that if you try to make a UI which is configurable and generic, you'll probably end up spending more time on that than it would have taken to build a specific UI using reusable components. And in the end, you need to add small "hacks" to make minor changes to the generic UI layer to suit each application. It WILL end up in a mess.
Instead, reuse the above-mentioned components to build a specific UI for each application.
Related
Due to our domain model and process, we are looking at sharing the model between client and server. Our client is a really thick client.
Is there any information about such architecture, pros and cons?
Theoretically if your Domain layer is totally decoupled (from persistence, presentation, infrastructure, etc.) it can easily be reused as a library in different places.
However, as Adrian points out, this raises practical issues:
Security: distributing your domain, especially in client applications, can be risky. One way around that is to obfuscate your binaries if the client is a desktop app.
Platform mismatch: you might not be able to use the same technology/language on client and server. This will result in a translation of your domain, basically doubling the initial amount of work, maintenance cost and bug proneness.
Versioning: even if the same library is reused, its versions on the client and server will probably have to be kept in sync to prevent incompatibilities.
Besides, unless you're developing a web version that is an exact clone of a desktop version, I feel that the domain reuse will be partial at best. In the case of a single client/server application, I'm curious to know why you would use the same domain on both tiers... usually what you have on the client side is data structures that might look a bit like the domain entities, but tweaked for the UI, and with no behavior. In that case, reusing the whole domain layer on the client side would mean dragging along a bulky object graph that partly does what you need but also carries a ton of other unneeded stuff.
Maybe what you need instead is the concept of a Bounded Context from Domain-Driven Design - same class names but slightly different classes in the Client context and the Server context.
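A rough illustration of that idea (class names and members are made up):

```csharp
// Server context: the Order carries domain behaviour.
namespace Server.Ordering
{
    public class Order
    {
        public int Id { get; private set; }
        public decimal Total { get; private set; }

        public Order(int id)
        {
            Id = id;
        }

        // Business rules live here, on the server.
        public void AddLine(decimal price, int quantity)
        {
            Total += price * quantity;
        }
    }
}

// Client context: same name, but just the data the UI needs, with no behaviour.
namespace Client.Ordering
{
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
        public string DisplayName { get; set; }
    }
}
```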
DDD and modern development practices encourage keeping domain logic out of the client. Most of the code happening in the client-side these days is there to leverage the GUI goodness of the client platform.
Two good reasons for keeping domain logic out of the client are security and maintainability.
For security, the server should regulate what the client can do. The client can be hacked to bits, but if all the domain logic and security are in the server, then no amount of hacking (on the client) can circumvent or damage the system.
For maintainability, if all your domain logic is on the server, then it's in one place. If it's in one place (or better still in a clearly defined module or namespace) then it's easier for anyone on the team to maintain the code.
All,
I'm currently revamping an ancient IVR written using Classic ASP with VXML 2.0. Believe me, it was a mess, largely due to the mixing of routing logic between the ASP code and the VXML logic, featuring multiple postbacks a la ASP.NET. Not fun to debug.
So we're starting fresh with MVC 3 and Razor and so far so good. I've succeeded in moving pretty much all the processing logic to the controller and just letting most of the VXML be just voicing a prompt and waiting for a DTMF reply.
But, looking at a lot of sample VXML code, it's beginning to look like it might actually be simpler to do basic routing using multiple forms on a page and VXML's built-in DTMF processing. More complex decision-making and database/server access would call the controller as it does now.
I'm torn between the desire to be strict about where the logic is, versus what might actually be simpler code. My VXML chops are not terribly advanced (I know enough to be dangerous), so I'm soliciting input. Have others used multiple forms on a page? Better or worse?
Thanks
Jim Stanley
Blackboard Connect Inc.
Choosing to use simple VoiceXML and moving the logic server side is a fairly common practice. Pros/Cons below.
Server-side logic
Often difficult to get retry counters to perform the way you want if you are also performing input validation (valid for grammar, but not for host or other validation logic)
Better programming language/toolkits for making logical descriptions (I'm not a fan of JavaScript, but even if you like JavaScript, you tend to have to create a lot of forms to get the flow control you want).
Usually easier to debug. Step through logical decisions and access to logging tools.
Usually easier to create reusable components that use parameters to alter component behavior.
Client side logic
Usually more scalable. VoiceXML browsers tend to use a large amount of their resources compiling and processing pages. One larger page will typically do better than a variety of smaller pages. However, platforms vary significantly and your size may make this negligible.
Better chance of using static pages. Many platforms have highly optimized caches (more than just fetched data). As above, this may only matter if you have 100s of ports per device or 1000s of ports hitting a server.
Mixing and matching isn't bad until somebody requests some sort of global behavior change. You may be making the change in multiple places. Debugging techniques will also vary so it may complicate your support paths (e.g. looking in browser logs versus server logs to see what happened on a call).
Our current framework uses a mix of server and client. All our logic is in the VoiceXML, and the server is used for state saving and generating recognition components. Unfortunately, because all our logic is in the VoiceXML, it is harder to unit test.
Rather than creating a large VoiceXML page that subdialogs to each question with all the routing done on the client side, we post back to the server after each collection and then work out where to go next. Obviously this has its pros/cons as Jim pointed out, but the hope is to abstract some of the IVR/call flow from the VoiceXML and reduce the dependency on skilling up developers in VoiceXML.
I'm looking at redeveloping using MVC3, creating different views based on base IVR functions, which can then be modified based on the hosting VoiceXML platform:
Recognition
Prompts
Transfer
CTI Get/Set
Disconnect
What I'm still working out is how to create reusable components within the MVC: whether to create something we subdialog to that returns the result (similar to how we currently do it), or redirect to a generic controller and then redirect to the "Completed" action once the controller is done.
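As a sketch of the "generic controller" option (every name here - the controller, the view, the parameters - is an assumption, not an existing API):

```csharp
using System.Web.Mvc;

// Purely illustrative: a generic recognition controller that voices a prompt,
// collects DTMF, and hands the result back to whatever flow started it.
public class RecognitionController : Controller
{
    // Renders a Razor view that emits VoiceXML: play the prompt, wait for digits,
    // and submit them to Completed below.
    public ActionResult Ask(string promptText, string returnAction, string returnController)
    {
        ViewBag.PromptText = promptText;
        ViewBag.ReturnAction = returnAction;
        ViewBag.ReturnController = returnController;
        return View("Ask");
    }

    // The VoiceXML platform posts the collected digits here.
    [HttpPost]
    public ActionResult Completed(string digits, string returnAction, string returnController)
    {
        // Route the result back to the calling flow's action.
        return RedirectToAction(returnAction, returnController, new { digits });
    }
}
```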
Jim Rush provides a pretty good overview of the pros and cons of server side versus client side logic and is pretty consistent with my discussion on this topic in my blog post "Client-side versus Server-side Development of VoiceXML Applications". I believe the pros of putting the logic on the server far outweigh putting it on the client. The VoiceXML User Group is moving towards removing most of this logic from VoiceXML in version 3.0 and suggesting using a new standard called State Chart XML (SCXML) to handle control of the voice application. I have started an open source project to make it easier to develop VoiceXML applications using ASP.NET MVC 3.0 which can be found on CodePlex and is called VoiceModel. There is an example application in this project which will demonstrate a method for keeping the logic server side, which I believe greatly improves reuse of voice objects.
Where would I go to learn to write code that has to be very, very secure and that DOES expose external services (running on a standard Windows or Linux OS)? Knowing what services can and cannot be safely exposed would be part of the issue. Note that I am not looking for a favorite choice between Linux and Windows, as the choice is not likely to be mine to make in any given case. However, the level of security needs to be military grade.
I almost feel embarrassed giving this as a for instance, but how would I know whether or not I could use, say, WCF, in such a setting?
High security is a difficult concept as it generally involves way more than just the code you wrote.
Basically every layer of the OSI model has to be taken into consideration: things like preventing capture or rerouting of the data stream between the end points (quantum cryptography).
At the higher levels, you have various things like:
Physical security of the devices (all endpoints if possible).
Hardening the OS (e.g. closing ports, turning off unused services, using Kerberos, VPN tunnels, and whitelists of machines allowed to connect, etc.);
Encrypting the data at rest (file encryption), in transmission (SSL), and in memory (column/table encryption).
Ensuring and enforcing proper authentication and authorization at every level (in app, in sql, etc).
Log EVERYTHING. At a minimum it should answer "who/what/when/where/how".
Along with the logging, actively monitor it, a.k.a. intrusion detection.
Then we can move on to other attack vectors like SQL injection, XSS, internal/disgruntled employees, etc.
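As one small, concrete example of closing off an attack vector from that list (SQL injection), use a parameterized query rather than string concatenation; the table and column names below are made up:

```csharp
using System.Data.SqlClient;

public static class UserLookup
{
    public static string GetUserName(string connectionString, int userId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Name FROM Users WHERE Id = @id", connection))
        {
            // User input is passed as a parameter, so it can never change the query shape.
            command.Parameters.AddWithValue("@id", userId);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}
```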
And once you've done all of that be prepared when a hacker gets away with everything they want simply by social engineering.
In short, the best tack to take in order to secure any computer-related application is to listen to the ethos of Fox Mulder, and Trust No One. Another favorite of mine that applies is: it's only paranoia if they aren't after you.
You could use formal methods to (sort of) prove the critical parts of your software. A tool like Frama-C (free, LGPL license, targeting embedded systems) could be relevant (at least if your software is critical, embedded, and written in C).
But "military grade" doesn't mean much. Your client will (and should) define exactly the standards to respect. For instance, critical [civilian] aircraft software needs to follow something like DO-178C (or its predecessor, DO-178B). Different industries have different standards similar to that (both the railway and medical industries have their own standards, which might be different in North America than in Europe).
If your system (& client) is less demanding (i.e. no billions of dollars or hundreds of lives threatened by bugs) you could consider customizing your compiler or using some other tool. For example, GCC is customizable through plugins or MELT extensions.
Don't forget that software reliability has a big price (that means a big cost for you, hence for your client).
Well, the question of where can be answered simply: not in school. I suggest creating a learning path for yourself. Pick a technology that you like and learn it inside out. A basic book to get you started should suffice; the rest you learn as you go, or via the documentation of that technology.
For instance, learning under .NET (Microsoft) involves a basic Apress textbook (I suggest Pro C# and the .NET 4.0 Platform). Thereafter, searching through the .NET Framework reference on MSDN will give you the rest.
If you are looking for WCF reference, I suggest the (MCTS Exam 70-503, Microsoft .NET Framework 3.5 Windows Communication Foundation) and MSDN.
Just keep in mind that no single technology will achieve what you are looking for. For example: WCF co-mingles with WF (Windows Workflow Foundation), as well as SQL Data Services and Entity Framework. Being exposed to multiple technologies will definitely broaden your vision.
===============================================================================
WCF is a beast in this regard. Here are the advantages over some other means of communication:
Messages (data) passed between end points can be secured via message-level security (encryption). The transport channel chosen can also be secured at protocol level via transport layer security (encryption).
End points themselves can authorize and impersonate clients (client-level security). You can implement end-to-end service tracing, health monitoring & performance counters, message logging, as well as forward and backward compatibility with newer/older clients (via graceful degradation of the message format, provided in WCF). If you choose to do so, you can even implement routing as a fail-safe for your communications channel. WCF also supports transactions (ACID), concurrency, and per-instance throttling, giving you the most flexibility in writing secure/robust military-grade code.
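As a small sketch of the message-level versus transport-level point above, here is how you might choose the security mode on a standard WCF binding (certificates, credentials and endpoint configuration are omitted):

```csharp
using System.ServiceModel;

public static class Bindings
{
    // Message-level security: the message itself is signed/encrypted,
    // so it stays protected even through intermediaries.
    public static WSHttpBinding MessageSecured()
    {
        return new WSHttpBinding(SecurityMode.Message);
    }

    // Transport-level security: the channel between the two end points
    // is protected (e.g. HTTPS/TLS).
    public static WSHttpBinding TransportSecured()
    {
        return new WSHttpBinding(SecurityMode.Transport);
    }
}
```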
In retrospect, the security and flexibility of WCF are astonishing. A similar technology (if not the same) is the WS-Security spec. It is part of the WS-* specifications for web services and deals with XML signature and XML encryption to provide a secure communications channel between two end points.
The disadvantage of WS-*, however, is that it is a one-way means of communication. WCF can facilitate two-way communication: a client can send a request to a server, but a server can also send requests to the client. WS-* dictates that a client can only send requests to and receive responses from the server, but not vice versa.
I am not a WCF developer, so I thought the highlights might provoke you into doing your own research. "There are hundreds of ways to skin an animal, neither of them is wrong..."
I have been working in C# for a few months and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes the currently playing song to a file in a directory, rewriting what was there before every time a new song plays.
Now I want to add this information to a database and keep a record of this and possibly add the information on my home page. I know I can hack a way for it to work, but I want to know what would be the "professional way" of doing things.
I came up with the following and got stuck. I would need an ODBC driver to connect to a database, which seems messy. Would a web service work? How would that work? Can VBScript call a DLL file to call upon a web service to modify data on a separate server? Is that safe to do?
Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer (sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and would live on your server.
The Database Access/Business layer would be your code that retrieves and stores data to/from your database. It might also contain business objects, which are basically classes that have properties representing your data from your database. The benefit of the data access layer is that sometimes reading and writing to a database can require specialized code, and you don't want that code sprinkled throughout your application. So instead you can call functions in your data access layer that load needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Object. There are lots of variations on this, of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes. Since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
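A minimal sketch of a POCO plus a data access interface for this particular scenario (the names are assumptions; a real implementation behind ISongRepository would use ADO.NET, an ORM, or similar):

```csharp
using System.Collections.Generic;

// A POCO: just properties that mirror the data, no database code attached.
public class Song
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Artist { get; set; }
}

// The data access layer: the only place that knows how songs are stored.
// The rest of the application only ever sees Song objects and this interface.
public interface ISongRepository
{
    IEnumerable<Song> GetRecentlyPlayed(int count);
    void Add(Song song);
}
```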
Sometimes the business layer will be its own layer, and would contain most of the "logic" of the application. It would sit between the data access and web service layers. Using concepts from Service Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like a class that is always instantiated, waiting there to process requests. Your web request handling service would take a request, and maybe first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits. Not always, though. SOA, when done right, is more scalable, so it really depends on whether SOA offers you additional benefits that you need.
The Webservice would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in your web service layer.
Now all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application more complex overall, it means each component is simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few, and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways. I have heard it said something along the lines of "If a class is getting too big and too complex, then split it up into a few more classes". In essence, don't start subdividing stuff for the heck of it just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding if you want to split it up. Sometimes for simple cases you have a layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
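Here is a rough sketch of what a COM-visible .NET class might look like (the ProgId and method name are made up; you would register the compiled DLL with regasm, and the script would reach it via CreateObject):

```csharp
using System.Runtime.InteropServices;

// Exposed to COM so that VBScript can do: Set o = CreateObject("NowPlaying.Publisher")
[ComVisible(true)]
[ProgId("NowPlaying.Publisher")]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class Publisher
{
    // Called from the script with the current track; this is where you would
    // call your web service client code to push the data to the server.
    public void SongChanged(string artist, string title)
    {
        // e.g. invoke a web service proxy here
    }
}
```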
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.
We maintain a system that has over a million lines of COBOL code. Does someone have suggestions about how to migrate to a GUI (probably Windows based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface.
If it was me I would look into something like this:
NetCobol for Windows
It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application.
It took us about 15 years to get off of our mainframe, because we didn't do something like this.
Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during the transition from server-based apps to 3-tier applications. One I have worked with had loads of interesting features, such as drop-down lists for regularly used fields, date pop-ups and even client-based macro languages based on the scraping input.
These weren't great but worked well for the clients and made sure the applications still worked in a reliable fashion.
There are a lot of different ways to put this together, but if you put some thought into it you could probably use Java or .NET to create a desktop-based application and, with a little extra effort, make a web-based implementation.
Micro Focus provides a tool called Enterprise Server which allows COBOL to interact with web services.
If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service.
For program A, you then generate a client proxy and A can now call B via a web service.
Of course, because B now has a web service, any other type of program (command line, Windows application, Java, ASP, etc.) can now also call it.
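For instance, from a .NET client the exposed service ends up being called like any other class; the contract, operation and URL below are made up purely to show the shape (in practice you would use the proxy generated by the tooling):

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract matching program B's exposed interface section.
[ServiceContract]
public interface IProgramB
{
    [OperationContract]
    decimal CalculatePremium(string policyNumber);
}

public static class Caller
{
    public static void Main()
    {
        // Any non-COBOL program can now invoke B through the web service.
        var factory = new ChannelFactory<IProgramB>(
            new BasicHttpBinding(),
            new EndpointAddress("http://legacy-server/ProgramB"));
        IProgramB proxy = factory.CreateChannel();
        Console.WriteLine(proxy.CalculatePremium("POL-1234"));
    }
}
```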
Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser based approach using something like ASP while still utilising the COBOL business engine.
And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term.
You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB.
Then you can begin replacing the legacy services with implementations on your new platform of choice.
The GUI need not be aware of the cut-over of the back-end service implementation, as long as the interface to the service does not change - minor changes may be hidden from the GUI by the ESB.
Business logic that resides in the legacy user interface layer will need to be refactored by extracting the business logic and exposing it as new services on the new platform to be consumed by the new GUI via the ESB.
As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native Windows platform? Then updates to the UI will only need to be applied to the web server rather than having to roll out changes to each individual workstation.