Migrating to a GUI without losing business logic written in COBOL - user-interface

We maintain a system that has over a million lines of COBOL code. Does someone have suggestions about how to migrate to a GUI (probably Windows based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface.

If it were me, I would look into something like this:
NetCobol for Windows
It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application.
It took us about 15 years to get off of our mainframe, because we didn't do something like this.
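To make the wrapping idea concrete, here is a minimal sketch of the facade approach. It is written in Java purely for illustration (the answer above suggests a .NET host), and the PricingEngine interface, the entry-point name and the stub implementation are all invented for the example; a real bridge would marshal the parameters into whatever record layout the COBOL program expects.

```java
// Hypothetical facade over a COBOL entry point (all names invented).
// New code depends only on this interface, never on how the COBOL is invoked.
interface PricingEngine {
    // Mirrors the parameters the COBOL program expects to receive.
    long quotePriceInCents(String productCode, int quantity);
}

// Stub standing in for the real bridge (NetCobol interop, JNI, or a web service call).
class CobolPricingEngine implements PricingEngine {
    @Override
    public long quotePriceInCents(String productCode, int quantity) {
        // A real wrapper would marshal the arguments into the record layout the
        // COBOL program expects and call the exposed entry point.
        return 1999L * quantity; // placeholder result for the sketch
    }
}

public class PricingDemo {
    public static void main(String[] args) {
        PricingEngine engine = new CobolPricingEngine();
        System.out.println("Quote (cents): " + engine.quotePriceInCents("WIDGET-01", 3));
    }
}
```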

Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during the transition from server-based apps to 3-tier applications. One I have worked with had loads of interesting features, such as drop-down lists for regularly used fields, date pop-ups and even client-side macro languages driven by the scraped input.
These weren't pretty, but they worked well for the clients and made sure the applications still behaved reliably.
There are a lot of different ways to put this together, but with some thought you could probably use Java or .NET to create a desktop-based application, and with a little extra effort produce a web-based implementation as well.

Micro Focus provides a tool called Enterprise Server which allows COBOL to interact with web services.
If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service.
For program A, you then generate a client proxy and A can now call B via a web service.
Of course, because B is now exposed as a web service, any other type of program (command line, Windows application, Java, ASP, etc.) can also call it.
Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser based approach using something like ASP while still utilising the COBOL business engine.
And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term.
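To give a feel for what the generated client proxy boils down to for a non-COBOL caller, here is a rough sketch that posts a SOAP request to such a service using Java's built-in HTTP client. The endpoint URL, the SOAPAction and the message shape are made up for the example; the real values come from whatever service definition the tool generates for program B.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Rough sketch of calling a COBOL program that has been exposed as a web service.
// The URL, SOAPAction and message body are placeholders for the example.
public class CobolServiceClient {
    public static void main(String[] args) throws Exception {
        String soapEnvelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body>"
          + "<CheckInventoryRequest><ItemCode>ABC123</ItemCode></CheckInventoryRequest>"
          + "</soapenv:Body>"
          + "</soapenv:Envelope>";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://legacy-host:9003/cobol/CheckInventory")) // hypothetical endpoint
            .header("Content-Type", "text/xml; charset=utf-8")
            .header("SOAPAction", "CheckInventory")
            .POST(HttpRequest.BodyPublishers.ofString(soapEnvelope))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is the reply produced by the COBOL program;
        // parse it with your XML library of choice.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```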

You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB.
Then you can begin replacing the legacy services with implementations on your new platform of choice.
The GUI need not be aware of the cut-over of the back-end service implementation, as long as the interface to the service does not change - minor changes may be hidden from the GUI by the ESB.
Business logic that resides in the legacy user-interface layer will need to be extracted and exposed as new services on the new platform, to be consumed by the new GUI via the ESB.
As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native Windows client? Then updates to the UI only need to be applied to the web server, rather than rolled out to each individual workstation.
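As a sketch of how the cut-over described above can stay invisible to the GUI (all names here are hypothetical), the GUI codes against a small service interface; one implementation routes through the ESB to the legacy service, a later one targets the new platform, and swapping them is purely a wiring change:

```java
// The GUI depends only on this interface (all names invented for the sketch).
interface CustomerService {
    String findCustomerName(String customerId);
}

// Implementation that routes through the ESB to the legacy COBOL service.
class EsbCustomerService implements CustomerService {
    @Override
    public String findCustomerName(String customerId) {
        // In practice: build a request and send it to the ESB endpoint,
        // which forwards it to the legacy system.
        return "Legacy result for " + customerId;
    }
}

// Later replacement running on the new platform; the GUI code never changes.
class NewPlatformCustomerService implements CustomerService {
    @Override
    public String findCustomerName(String customerId) {
        return "New platform result for " + customerId;
    }
}

public class GuiBootstrap {
    public static void main(String[] args) {
        // Cutting over is a wiring change here, invisible to the rest of the GUI.
        CustomerService service = new EsbCustomerService();
        System.out.println(service.findCustomerName("42"));
    }
}
```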

Related

Ways of communication between a Chromium container and a VB application

We have a traditional VB application which is used for organization operations. Now we are building a hybrid application developed using HTML5, CSS and JavaScript, targeted at the Google Chromium desktop container. We are planning to provide a way to exchange large data sets, like employee records, between these two applications. Now my specific question is:
What are the different ways to achieve communication between Chromium desktop container and VB application to exchange large chunks of data?
Sounds a bit painful no matter what.
Chrome Apps Architecture
All external processes are isolated from the app.
This would seem to suggest the obvious course is to use cloud data services, whether on public or private clouds.
I suspect that for political as well as practical reasons no cloud vendor goes to the trouble to provide VB/VBA-friendly APIs for their services. Mainly nobody wants to deal with support issues from the teeming hordes of casual coders the VB community is saddled with.
The VB6 community hasn't stepped up and taken care of this themselves either.
If you can limp along with the burdens of ".Net Inter Clop" (the usual MS answer) that might be a way to exploit existing API implementations.
Otherwise you might roll your own cloud. I see a few obvious services you'd want to implement in your cloud with lightweight APIs easily implemented in both of your development ecosystems:
Bulk Storage. I suggest WebDAV, which IIS supports. If you eschew the locking features then WebDAV API implementations are pretty easy in both JS and VB. Or buy (or scrounge open source) implementations of a more complete WebDAV client library.
DBMS. Pick any, and put a simple REST-like XML-over-HTTP API in front of it. Relatively easy to implement; a minimal sketch follows this list.
Push Notifications. I'd write a custom service accepting long-duration TCP connections from all clients, and with protocols and workflow à la Amazon SNS or Google Cloud Messaging. Such a service would be generally light in resource consumption but you'd probably want a dedicated box with OS tweaks to support a large number of active TCP connections.
Maybe optionally a message queue service?
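As an example of how small that "XML over HTTP" facade can be, here is a rough sketch using the JDK's built-in HTTP server. The endpoint, port and hard-coded employee record are placeholders; in a real service the XML would be produced from a database query, and the JS and VB sides would each consume it with their own HTTP client and XML parser.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal "XML over HTTP" facade in front of whatever DBMS you pick.
// GET /employees returns an XML document; both clients only need an HTTP
// client and an XML parser, which they both have.
public class XmlOverHttpServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/employees", exchange -> {
            // In a real service this XML would be built from a database query.
            String xml = "<?xml version=\"1.0\"?><employees>"
                       + "<employee id=\"1\"><name>Alice</name></employee>"
                       + "</employees>";
            byte[] body = xml.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/xml; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/employees");
    }
}
```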
Nothing novel here, these are all well established patterns.
All of the tools to do that are pretty off-the-shelf whether you want your cloud servers to be based on Windows, Linux, or generically Java anywhere.
Most of the effort will probably go into developing a consistent authentication model, access control model, and of course an integrated administration interface, monitoring, and logging to help keep operating overhead low and uptime high. Well, that and developer docs and training.
Ok, still a lot of work. Too bad there isn't a "cloud in the box" with the API libraries you'd need that you can buy off the shelf today.
Or perhaps I'm missing something obvious?

What is the proper way to make a GUI

I am working on a series of different software products. They are quite old, so we're in the process of refactoring/improving them. My co-worker had the idea of abstracting the GUI and having it run in its own process and communicate with the logical portion of the program via sockets. This will allow us to use the same GUI components with all of the different applications (keeping the same LAF). So my question is: Is this a valid practice for creating a GUI? Would I be better off keeping the GUI tied in with the rest of the program? What are the pros and cons of the different methods, and are there any other methods for implementing GUIs?
Thanks
Yes, it's a perfectly valid way to write a GUI program. This is roughly how web apps work -- the UI (browser) communicates with the business logic server (web server) over a socket.
It's a little bit unusual for a desktop application, but it's quite acceptable. The beauty of this solution is that it lets you write multiple rich clients for different platforms (think mobile app, windows app, browser-based app, etc.)
All you need to do is define the API that a GUI will need to talk to the back end. For example, it will need a way to get objects and save objects, and to receive notifications from the back end that the UI needs updating.
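One way to pin that contract down (all names are made up for the example) is a small back-end interface plus a listener the back end uses to tell the UI to refresh. The in-process stub below just makes the sketch runnable; a real deployment would put a socket or HTTP proxy behind the same interface.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the contract between the GUI process and the business-logic process.
// Whether calls travel over a socket, HTTP or an IPC framework is a transport
// detail hidden behind this interface.
interface BackendApi {
    List<String> loadItems();          // "get objects"
    void saveItem(String item);        // "save objects"
    void subscribe(UpdateListener l);  // "tell me when the UI needs refreshing"
}

// Callback the back end invokes so the GUI knows to refresh itself.
interface UpdateListener {
    void itemsChanged();
}

// In-process stub so the sketch runs; a real deployment would proxy these
// calls across the process boundary.
public class BackendApiDemo implements BackendApi {
    private final List<String> items = new ArrayList<>();
    private final List<UpdateListener> listeners = new ArrayList<>();

    public List<String> loadItems() { return new ArrayList<>(items); }

    public void saveItem(String item) {
        items.add(item);
        listeners.forEach(UpdateListener::itemsChanged); // notify the GUI side
    }

    public void subscribe(UpdateListener l) { listeners.add(l); }

    public static void main(String[] args) {
        BackendApi api = new BackendApiDemo();
        api.subscribe(() -> System.out.println("UI should refresh now"));
        api.saveItem("example");
        System.out.println(api.loadItems());
    }
}
```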
With service and presentation layers properly designed that should be perfectly all right. To summarize pros and cons in my opinion:
Pros:
UI not physically bound to the logic, so the logic layers can be remote (or even a standalone BL server serving several clients). Let's call it "business logic location independence".
Possibility to create different versions of the GUI (and not only graphical ones - the BL could also be exposed as a service, for example as a feed or a reporting endpoint): "GUI platform independence", and a step towards an SOA approach.
Possibility to add a proxy between the BL and the GUI - for security and caching purposes - or a load balancer in front of an application farm, or an adapter to support "old" clients after significant BL changes ("resiliency and fail-safety"?).
Deployment could be easier to some extent (fixing bugs in the UI wouldn't affect the BL layer - just a consequence of binary module independence).
Ability to add "offline mode" to GUI.
Cons:
You're adding one more communication link, which could be yet another point of failure, and some effort will have to be spent on testing it.
Increased data traffic between the GUI and the BL, and probably more serialization work.
Need to track communication protocol changes and to maintain protocol versions properly.
(Negative side of proxy ability) Possibility of man-in-the-middle attack between GUI and BL.
Depends on the type of application.
Desktop applications
It makes sense if the server part can run on a dedicated machine. It does not make sense (for most applications) if both the server and the GUI are going to be installed on each desktop; in that case, use separate projects/DLLs to keep the UI and business logic apart.
Web applications
Yes. Many web applications have a separate service layer and use SOAP for communication between the GUI and the service layer.
Sockets
Using vanilla sockets is seldom a good choice today. Why waste energy/time building your own protocol and implementation when there are several excellent IPC frameworks available?
Update in response to comment
Divide and conquer. Break down the UI into components as small as possible to make them reusable. Put those components into a separate project/DLL. A sample component could be a UserTable which presents a list of all users (taking a dependency on the interface IUserService).
Don't try to reuse the entire UI layer, since that's doomed to fail. The reason is that if you try to make a UI which is configurable and generic, you'll probably end up spending more time on that than it would have taken to build a specific UI using reusable components. And in the end you'll need to add small "hacks" to the generic UI layer to suit each application. It WILL end up in a mess.
Instead, reuse the above-mentioned components to build a specific UI for each application.
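A stripped-down sketch of that kind of component might look like the following. IUserService and the console "rendering" are placeholders; a real component would be a control in whatever UI toolkit the applications share, but the dependency structure is the point.

```java
import java.util.List;

// The reusable component depends only on this interface, so each application
// can plug in its own implementation (database, web service, test stub, ...).
interface IUserService {
    List<String> listUserNames();
}

// Reusable "UserTable" component; rendering is reduced to console output
// to keep the sketch short.
public class UserTable {
    private final IUserService userService;

    public UserTable(IUserService userService) {
        this.userService = userService;
    }

    public void render() {
        System.out.println("Users:");
        for (String name : userService.listUserNames()) {
            System.out.println(" - " + name);
        }
    }

    public static void main(String[] args) {
        // Each application wires in its own IUserService implementation.
        UserTable table = new UserTable(() -> List.of("alice", "bob"));
        table.render();
    }
}
```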

What is the development environment for TIBCO Business Works?

I see all these job posts for TIBCO developers, but from tibco.com I couldn't really work out what a developer does code-wise on this platform, because the site is geared more towards end users. Is it a Java-based platform?
I'll assume that you are talking about TIBCO Business Works as this is where the majority of the development is done.
TIBCO Business Works is a Java-based platform; however, normally very little development is done in Java. At its heart, TIBCO Business Works is an XSLT processing engine with lots (and I mean lots) of connectivity components (called Starters and Activities in the TIBCO world).
Development is done graphically by linking the Starter to Activities and eventually to an End Activity, very much like a traditional process diagram. You can see what I mean in the top right of this screenshot.
Each of these diagrams is called a Process Definition and the closest equivalent in Java is a method, however they are more closely related to C functions as there is no concept of a Class for Process Definitions.
Looking closely, you'll notice that the StorePO Publish To Adapter Activity is selected. In the bottom right you can see the input to this activity is "mapped" from other process data (which can be either the output from the Start, or the output from other activities). This mapping is actually XSLT, just represented visually. So much so, that copying the root node of the mapping ("body" in this case) into a text document pastes as XSLT (you can even edit it there and copy it back if you are so inclined; good for when you need to do a search and replace).
Looking back at the Process Definition, there is a CheckInventory Call Process Activity. This is how you invoke another Process Definition from the one you are working on. In fact, this Process Definition has a plain Start Activity, which indicates that it is invoked from another Process Definition.
Starter processes are Process Definitions that have a Process Starter instead of a Start Activity. The Process Starter triggers the invocation of the Process Definition based on some event. For instance, a JMS Queue Receiver Process Starter, will trigger when it receives a specific JMS message. There are many such Process Starters, including SOAP, HTTP, SMTP and even plain old TCP.
Likewise, there are many Activities, including the ones above as well as JDBC and FTP.
Without actually having access to TIBCO Designer, the best way to beef up your skills for a TIBCO role is to focus on XPath and XSLT as that's mostly what you'll be working with.
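Since the visual mapping is just XSLT underneath, a quick way to get a feel for it without Designer is to run a small transform yourself. The stylesheet and input below are invented rather than taken from a real BW project; they only show the kind of source-to-target mapping the tool builds for you.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Tiny XSLT mapping run with the JDK's built-in processor, roughly the kind of
// source-to-target transformation a BW activity mapping represents visually.
public class XsltMappingDemo {
    public static void main(String[] args) throws Exception {
        String stylesheet =
            "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
          + "<xsl:template match=\"/order\">"
          + "<body><poNumber><xsl:value-of select=\"id\"/></poNumber></body>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";

        String input = "<order><id>PO-1001</id></order>";

        Transformer transformer = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(stylesheet)));
        transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");

        StringWriter output = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(input)),
                              new StreamResult(output));

        System.out.println(output); // <body><poNumber>PO-1001</poNumber></body>
    }
}
```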
TIBCO AMX BusinessWorks is a Java platform used for integration and automation purposes. It uses a plug-in based architecture, which means that you can extend its functionality. The product has changed from the 5.x version to the current 6.4.x version to include microservices capabilities, containerization, cloud enablement, etc.
It uses a model-driven development approach to reduce the amount of coding, which is why it is so powerful.
You can find more information on the official documentation site: Documentation TIBCO AMX BW
If you know Spanish and want to learn about the 5.x version, I have a set of video tutorials at TIBCO AMX BW Tutorials.

Why bother with multi-layer RIA if the Internet is now fast enough to do "traditional" fat-client C/S?

Why bother with a multi-layer RIA if the Internet is now fast enough to do "traditional" fat-client C/S?
Why not just use a plain C++ / Delphi / Oracle Forms / Java Swing application talking directly to the RDBMS over the Internet?
A very complex compiled exe program in Delphi is about 10 MB; that amount of code downloads in a couple of minutes on a decent 1 MB ADSL connection.
After all, isn't that what we are doing with AJAX / BlazeDS / JSON / etc. pushed through the HTTP/HTTPS protocol, just with a lot more layers and a lot more points of failure?
Comments please...
First, a bit about terminology: what you refer to as "traditional fat clients" are probably desktop software. Web applications are often written as thin clients, but they can also be written as fat clients. A fat-client rich internet application is client-centric, which means that a lot of the work is done in the client (browser). Fat-client RIAs can be written with the help of technologies such as AJAX or Adobe Flash.
To compare the advantages of web based applications over desktop software:
Maintainability: One of the advantages of web-based applications is their maintainability. You only have to make one installation of the application and it is then directly available to all users. The same goes for updating the software: you only need to update it on the server and you can then be sure that every single user is using the latest version. This eliminates the need to update individual installations of the application on the users' computers.
Security: There are two positive security implications in using web-based applications. As said previously, you only need to update the software in one place. This means that the users always have the most up-to-date version of the software in use, thus eliminating the problem of people using outdated, vulnerable versions of the application.
What is more important is that fat client applications are insecure: they expose application logic and possibly sensitive data such as database credentials. Fat clients can be reverse engineered and attacks can be crafted based on the gained information. For an application to be truly secure, the application logic should stay on the server and the client should be thin and only serve as a presentation layer for the information handled in the application. Do remember that exposure of application logic can also affect rich internet applications; it is easy to write an RIA in a way that exposes application logic. Hence it is important to remember that the application's state should always stay on the server; the browser is, as said, only a means of presenting the data. In other words, both web-based applications and desktop applications can be (in)secure, I'd just say that there is a greater risk of pushing application logic to the client when writing desktop software.
Platform independence: Web-based applications are platform independent (with the exception of applications that use platform-specific functionality, such as ActiveX). This means that your users can be using the application from a Mac, a Windows or a Linux computer; it doesn't matter. Of course, it is unfortunately easy to create web applications that only work on specific browsers, such as Internet Explorer. Still, it is much easier to make a web application cross-browser compatible than to write desktop software that is truly cross-platform.
Accessibility: If you are connected to the Internet/intranet, you have access to the application. It doesn't matter if you have borrowed your friend's laptop or if you are sitting at your desktop computer, you still have access to the application since it doesn't require you to install anything on the computer. Just browse to the application URL.

Starting out, any suggestions?

I have been working in C# for a few months and I am looking for something more challenging and interesting. I use a media player called MediaMonkey that supports custom VB scripts; I made one that writes the currently playing song to a file in a directory, rewriting what was there before each time a new song starts playing.
Now I want to add this information to a database, keep a record of it, and possibly show the information on my home page. I know I can hack together a way for it to work, but I want to know what the "professional way" of doing things would be.
I came up with the following and got stuck: I would need an ODBC driver to connect to a database, which seems messy. Would a web service work? How would that work? Can VBScript call a DLL to call upon a web service to modify data on a separate server? Is that safe to do?
Many professional C# apps are n-tier. In your case, you would probably layer it like this:
On the server:
-Database Store
-Database Access/Business layer(sometimes two distinct components, depending on how complex the app is)
-Web Service
On the client:
-Web Service Client
-Any other layers to support client functionality.
So the Database Store would be something like some tables in an Oracle or Microsoft SQL Server database, and would live on your server.
Database Access/Business layer would be your code that retrieves and stores data to/from your database. It might also contain business objects, which are basically classes that have properties representing your data from your database. The benefit of the data access layer is that sometimes reading and writing to a database can require specialized code, and you don't want that code sprinkled throughout your application. So instead you can call functions in your data access layer that load needed data into objects, so the rest of your application is just interacting with a regular old .NET object/class. These are called POCOs, which stands for Plain Old CLR Object. There are lots of variations on this of course, as people have taken different approaches to the problem of isolating database access. It also serves the purpose of minimizing breaking changes whenever the database changes. Since the database access logic is not sprinkled throughout the app, there are fewer places that need to be updated if the database changes (such as adding new columns to a table or changing a name).
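To make the data access idea concrete, here is a minimal sketch (in Java rather than C#, purely to keep the examples in one language, and with invented names): a plain object carrying the data, a repository interface that is the only code aware of storage, and an in-memory implementation standing in for the real JDBC/ORM code.

```java
import java.util.ArrayList;
import java.util.List;

// Plain object carrying a row of song-history data; no database code here.
class SongPlay {
    final String title;
    final String artist;

    SongPlay(String title, String artist) {
        this.title = title;
        this.artist = artist;
    }
}

// The data access layer: the only place that knows how rows become objects.
// If the table gains a column or a name changes, only this layer is touched.
interface SongPlayRepository {
    void save(SongPlay play);
    List<SongPlay> findAll();
}

// In-memory stand-in so the sketch runs; a real one would use JDBC or an ORM.
public class InMemorySongPlayRepository implements SongPlayRepository {
    private final List<SongPlay> rows = new ArrayList<>();

    public void save(SongPlay play) { rows.add(play); }

    public List<SongPlay> findAll() { return new ArrayList<>(rows); }

    public static void main(String[] args) {
        SongPlayRepository repo = new InMemorySongPlayRepository();
        repo.save(new SongPlay("Example Song", "Example Artist"));
        System.out.println(repo.findAll().size() + " play(s) recorded");
    }
}
```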
Sometimes the business layer will be its own layer, and would contain most of the "logic" of the application. It would sit between the data access and web service layers. Using concepts from Service Oriented Architecture (SOA), you might have an authentication service and a web request handling service. These services are a lot like a class that is always instantiated, there waiting to process requests. Your web request handling service would take a request, and maybe first call into the authentication service to verify credentials before honoring the request. SOA is one of those things I think should be used only when appropriate. In some cases just using object-oriented techniques will give you the same benefits. Not always though. SOA, when done right, is more scalable, so it really depends on whether SOA offers you additional benefits that you need.
The Webservice would be responsible for receiving requests from the web, parsing/interpreting them, and acting on those requests by making calls into your business layer to update or retrieve data.
So the concept here would be that you could have many users of your service who publish their song updates through your service.
Your client would have a "web service client" layer which would be responsible for formatting requests into messages, sending them to the web service, and retrieving messages from the web service. You would put very little application "logic" in your web service layer.
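A bare-bones sketch of that client layer might look like this (again Java for illustration, with a made-up endpoint and message format); it only formats the song update into a message and sends it, with no business logic of its own.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the "web service client" layer: it turns a song update into a
// request message and sends it; no business logic lives here.
public class NowPlayingClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final URI endpoint = URI.create("http://example.com/api/nowplaying"); // hypothetical

    public int publish(String artist, String title) throws Exception {
        String body = "<nowPlaying><artist>" + artist + "</artist>"
                    + "<title>" + title + "</title></nowPlaying>";
        HttpRequest request = HttpRequest.newBuilder(endpoint)
            .header("Content-Type", "text/xml")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        return http.send(request, HttpResponse.BodyHandlers.discarding()).statusCode();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new NowPlayingClient().publish("Some Artist", "Some Song"));
    }
}
```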
Now all this is probably overkill and inefficient for what you are wanting to do, since you just want something for yourself, but it's the basic anatomy of a lot of web service applications and would be a good learning exercise. The whole purpose of the layers is decoupling and simplicity. While more layers/components make the application more complex overall, it means each component is simpler. This means it's easier to wrap your head around problems when you are only dealing with one component which interacts with only a couple of other components (the surrounding layers). So there is a careful balance between few components and many components. Too few and they become monolithic and difficult to manage. Too many, and they become intertwined in complex ways. I have heard it said something along the lines of "If a class is getting too big and too complex, then split it up into a few more classes". In essence, don't start subdividing stuff for the heck of it just because it sounds like the right thing to do. Evaluate how complex your component is going to be before deciding if you want to split it up. Sometimes for simple cases you have a layer serving more than one purpose, for the sake of getting it done faster and making the overall design simpler. The point is, apply these concepts where appropriate. You will learn what is appropriate with experience, and you obviously understand that you can learn the most by "doing".
"Can vbscript call a COM component?" You can compile .NET DLLs with COM support. Many older things can call COM dlls.
I googled: vbscript dll
and got this: VB Script and DLLs
"Is that safe to do?" Your webservice will be where you would be most concerned with security. It's safe only if you design with security in mind and don't screw up. We all screw up sometimes though, which means there is no guarantee of it being perfectly secure.
