What is the development environment for TIBCO Business Works?

I see all these job posts for TIBCO developers, but from tibco.com I couldn't really work out what a developer does code-wise on this platform, because the site is geared more towards end users. Is it a Java-based platform?

I'll assume that you are talking about TIBCO Business Works as this is where the majority of the development is done.
TIBCO Business Works is a Java-based platform; however, normally very little development is done in Java. At its heart, TIBCO Business Works is an XSLT processing engine with lots (and I mean lots) of connectivity components (called Starters and Activities in the TIBCO world).
Development is done graphically by linking the Starter to Activities and eventually to an End Activity, very much like a traditional process diagram. You can see what I mean in the top right of this screenshot:
Each of these diagrams is called a Process Definition, and the closest equivalent in Java is a method; however, they are more closely related to C functions, as there is no concept of a class for Process Definitions.
Looking closely, you'll notice that the StorePO Publish To Adapter Activity is selected. In the bottom right you can see that the input to this activity is "mapped" from other process data (which can be either the output from the Start, or the output from other activities). This mapping is actually XSLT, just represented visually. So much so that copying the root node of the mapping ("body" in this case) into a text document pastes as XSLT (you can even edit it there and copy it back if you are so inclined; good for when you need to do a search and replace).
Looking back at the Process Definition, there is a CheckInventory Call Process Activity. This is how you invoke another Process Definition from the one you are working on. In fact, this Process Definition has a plain Start Activity, which indicates that it is invoked from another Process Definition.
Starter processes are Process Definitions that have a Process Starter instead of a Start Activity. The Process Starter triggers the invocation of the Process Definition based on some event. For instance, a JMS Queue Receiver Process Starter will trigger when it receives a specific JMS message. There are many such Process Starters, including SOAP, HTTP, SMTP and even plain old TCP.
Likewise, there are many Activities, including the ones shown above as well as JDBC and FTP.
Without actually having access to TIBCO Designer, the best way to beef up your skills for a TIBCO role is to focus on XPath and XSLT as that's mostly what you'll be working with.
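If you don't have Designer handy, the JDK's built-in javax.xml APIs are enough to practice both. Here's a minimal, self-contained sketch; the file names and the XPath expression are placeholders, not taken from any real project:

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;
    import java.io.FileReader;

    public class XsltPractice {
        public static void main(String[] args) throws Exception {
            // Evaluate an XPath expression against a document (path and expression are placeholders).
            XPath xpath = XPathFactory.newInstance().newXPath();
            String id = xpath.evaluate("/PurchaseOrder/Header/ID",
                    new InputSource(new FileReader("input.xml")));
            System.out.println("ID = " + id);

            // Apply a stylesheet to the same document, e.g. one pasted out of a Designer mapping.
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("mapping.xsl"));
            t.transform(new StreamSource("input.xml"), new StreamResult(System.out));
        }
    }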

TIBCO AMX Business Works is a Java platform used for integration and automation purposes. It uses a plug-in based architecture, which means you can extend its functionality. The product has evolved from the 5.x version to the current 6.4.x version to include microservices capabilities, containerization, cloud enablement, etc.
It uses a model-driven development approach to reduce the amount of hand-written code, which is why it is so powerful.
You can find more information on the official documentation site: Documentation TIBCO AMX BW
If you know Spanish and want to learn about the 5.x version, I have a set of video tutorials at TIBCO AMX BW Tutorials.

Related

Suitable frameworks for an ERP-like application

I want to create a production management system to be used by a small manufacturing firm. The system will allow users to document the different stages in the manufacturing of equipment. The requirements are as follows:
1. Non-browser-based interface. Need something Swing- or AWT-based. While I understand the convenience of implementing a browser-based solution, the business owner insists on a non-browser interface.
2. Accessed from multiple systems. These systems will allow CRUD operations on the central system (thin client?).
3. The application will not have more than 3 concurrent users.
I need some advice regarding what would be a good path for this kind of application. Currently, I'm thinking of using Griffon with RMI. However, I don't have much development experience. I've read a bit about Apache River (Jini) too. Would it be a good idea to use Griffon with RMI?
Please provide some advice. Thanks.
EDIT: After some reading, I've decided to use more mainstream frameworks, so Griffon is not an option. How about Jini (Apache River) or OSGi (Apache Felix)?
Hmm, how is it that a project which recently moved out of the incubation phase is considered mainstream versus a project that has been used in production for more than 3 years now? Anyway, Apache River gives you access to Jini technology and nothing more, meaning you can't achieve item #1 of your list with River alone. River may use RMI for accessing remote resources; however, you can use RMI directly (see the sketch below), or try out DRMI, Kryonet, Hessian/Burlap, Spring's HTTP Invoker, Protocol Buffers, Avro/Thrift, REST, SOAP, ZMQ and many more.
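To give you a sense of scale, using RMI directly can be as small as the following sketch (the service name and interface are made up for illustration):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface for the production-management system.
    interface OrderService extends Remote {
        String createOrder(String equipmentId) throws RemoteException;
    }

    class OrderServiceImpl implements OrderService {
        public String createOrder(String equipmentId) { return "order-" + equipmentId; }
    }

    public class RmiDemo {
        public static void main(String[] args) throws Exception {
            // Server side: export the implementation and register it by name.
            OrderService stub = (OrderService) UnicastRemoteObject.exportObject(new OrderServiceImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("OrderService", stub);

            // Client side (normally a different JVM): look the service up and call it.
            Registry remote = LocateRegistry.getRegistry("localhost", 1099);
            OrderService service = (OrderService) remote.lookup("OrderService");
            System.out.println(service.createOrder("EQ-7"));
        }
    }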
Even if you choose one of these options and/or River, you still have to define the following things:
application structure (file structure and runtime behavior)
build setup
dependency management
testing profiles
packaging
deployment strategies
These things and more are what Griffon brings to the table. As you may have noticed, the framework allows you to build up applications by adding plugins, reducing the amount of time you must allot for hunting down dependencies, setting up bootstrap mechanisms and getting things done. On the subject of remoting technologies, have a look at the different options Griffon has to offer: http://artifacts.griffon-framework.org/tags/plugin/remoting
Even more, you can also combine OpenDolphin (http://open-dolphin.org/dolphin_website/Home.html) with Griffon. There's even an example application found at the OpenDolphin repository showing a full client-server application (built with Griffon, Grails and OpenDolphin): https://github.com/canoo/open-dolphin/tree/master/dolphin-griffon-crud
With what seems to be your current understanding of the problem, I would not recommend OSGi, especially for a small manufacturing firm (possible maintenance issues, depending on the "personnel").
The main reason why I wouldn't advocate Jini or OSGi in your case is because of what you said:
However, i don't have much development experience.
Jini (Apache River) is a viable option as long as you fully understand the concepts of the LookupService and service registrations, etc. There's tons of RMI going on here, with possible firewall implications...
OSGi is not difficult, but you may have issues deciding how to structure your applications, as well as how to interact with services, etc.
Try to stick to the simplest approach that you can handle for the implementation (keeping a flexible design in mind): make it work, then improve it.
There are simple web-service options such as Spring Remoting (over HTTP/HTTPS, for example), unless Spring introduces too many concepts and headaches for your app.
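For a flavor of how little client code Spring's HTTP invoker needs, here's a minimal sketch; the interface and URL are invented, and the server side would export the same interface with HttpInvokerServiceExporter:

    import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

    // Hypothetical shared service interface, visible to both client and server.
    interface OrderService {
        String placeOrder(String item);
    }

    public class RemotingClient {
        public static void main(String[] args) {
            // Plain Java method calls, serialized over HTTP by Spring.
            HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
            proxy.setServiceUrl("http://server:8080/remoting/OrderService"); // placeholder URL
            proxy.setServiceInterface(OrderService.class);
            proxy.afterPropertiesSet();

            OrderService service = (OrderService) proxy.getObject();
            System.out.println(service.placeOrder("widget"));
        }
    }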

Message bus for OSGi services

I'm in the middle of a project where we will migrate a major software system based on a large set of custom-made technologies to one based on OSGi services. For this we will likely need some sort of message bus that plays nicely with OSGi services. The requirements are:
Sync and async delivery
Point-to-point only
Guaranteed delivery, preferably with persistence via files
Strict message ordering from the same client (async mode), but not necessarily from different clients
Support for process-to-process and node-to-node delivery is nice, but not strictly required
Open-source solutions are preferred, but not required.
I have looked at eventbus (as recommended in https://stackoverflow.com/a/1953453/796559), but that does not seem to work well.
So the question is, which technologies match the above?
Tonny,
Having just come from a very similar, and successful, project, please let me share my experience with you to save you some time and your company some money. First and foremost, ESBs were a really good idea 8 years ago when they were proposed. And they solved an important problem: how do you define a business problem in a way that those pesky coders will understand? The goal was to develop a system that would allow a business person to create a software solution with little or no pesky developer interaction needed that would suck up money better spent on management bonuses.
To answer this, the good folks at many organizations came up with JBI, BPMN and a host of other solutions that let business folks model the business processes they wanted to "digitize". But really, they were all flawed at a very critical level: they addressed business issues, but not integration issues. As such, many of these implementations were unsuccessful unless done by some high-priced consultant, and even then your prospects were sketchy.
At the same time, some really smart folks in the early 2000s published a book called "Enterprise Integration Patterns" which identified over 60 design patterns used to solve common integration problems. Many of the folks doing ESB work realized that their problem wasn't one of business modelling; rather, the problem was how to integrate their existing applications. To help solve this, James Strachan and some really smart guys started the Apache Software Foundation project "Camel". Camel is a strict implementation of Enterprise Integration Patterns, in addition to a huge number of components designed to allow folks like you and me to hook stuff together.
So, if you think of your business process as simply a need to send data from one application to another to another, with the appropriate data transformations between, then Camel is your answer. Now, what if you want to base the "route" (a specified series of application endpoints you want to send data through) off of a set of configurable rules in a database? Well, Camel can do that too! There's an endpoint for that! Anyhow, don't do the traditional ESB, it's old and busted. And absolutely do the Camel thing.
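To make that concrete, here's a minimal sketch of a Camel route in Java; the endpoint URIs are placeholders (file and xslt are core Camel components, and the stylesheet is assumed to be on the classpath):

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class IntegrationRoute {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                public void configure() {
                    from("file:inbox")              // pick up files as they arrive
                        .to("xslt:mapping.xsl")     // transform the data in flight
                        .to("file:outbox");         // hand the result to the next application
                }
            });
            context.start();
            Thread.sleep(60_000);                   // let the route run briefly for this demo
            context.stop();
        }
    }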
Please let me know if this helps.
The OSGi specification defines a component called "Event Admin", which is a lightweight pub-sub event subsystem.
From the RFC0157:
Event Admin specifies a means for an event source to send events to event listeners. Event sources can create events with a topic and properties and request Event Admin to deliver the events to event listeners which have declared interest in specific event topics and/or property values. The event source can request synchronous (and unordered) delivery or asynchronous (and ordered) delivery.
Compared to your requirements, it would score as follows:
Sync and async delivery: Check
Point-to-point only: No, pub-sub
Guaranteed delivery, preferably with persistence via files: NO
Strict message ordering from the same client (async mode): YES
Support for process-to-process: if (process == OSGi service) -> Yes
Support for node-to-node: Not yet. The guys of Distributed OSGi have been working on this, but I've not seen anything concrete.
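For a flavor of the Event Admin API scored above, here's a minimal sketch; the topic names and properties are made up, and the Map-based Event constructor assumes Event Admin 1.2+:

    import java.util.HashMap;
    import java.util.Hashtable;
    import java.util.Map;
    import org.osgi.framework.BundleContext;
    import org.osgi.service.event.Event;
    import org.osgi.service.event.EventAdmin;
    import org.osgi.service.event.EventConstants;
    import org.osgi.service.event.EventHandler;

    public class EventAdminSketch {
        // Publisher side: EventAdmin is obtained as an OSGi service.
        static void publish(EventAdmin eventAdmin) {
            Map<String, Object> props = new HashMap<String, Object>();
            props.put("orderId", 42);
            Event event = new Event("com/example/order/CREATED", props);
            eventAdmin.postEvent(event);  // asynchronous (and ordered)
            eventAdmin.sendEvent(event);  // synchronous (and unordered)
        }

        // Subscriber side: register an EventHandler with a topic filter.
        static void subscribe(BundleContext context) {
            Hashtable<String, Object> props = new Hashtable<String, Object>();
            props.put(EventConstants.EVENT_TOPIC, "com/example/order/*");
            context.registerService(EventHandler.class.getName(), new EventHandler() {
                public void handleEvent(Event event) {
                    System.out.println("Got " + event.getTopic() + ", orderId=" + event.getProperty("orderId"));
                }
            }, props);
        }
    }

Note how postEvent/sendEvent line up with the async/sync behavior quoted from the RFC.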
I like the concept of Camel, but recently decided to go for the (lighter) Event Admin as my requirements are limited. +1 to Mike on the Camel motivation. I'd look into it and then compare options before deciding.
Aren't you looking for an ESB? ServiceMix is a:
flexible, open-source integration container that unifies the features and functionality of Apache ActiveMQ, Camel, CXF, ODE and Karaf into a powerful runtime platform you can use to build your own integration solutions. It provides a complete, enterprise-ready ESB exclusively powered by OSGi.
iPOJO Event Admin Handlers are a nicer-to-use way to access the Event Admin service mentioned by #maasg.
Looks like you are talking about an ESB here. If that's the case, then you might have a look at WSO2 ESB. It is powered by Apache Synapse. It uses OSGi as the modular framework, so you can add/remove features according to your requirements. There is a whole product stack from WSO2, like message brokers, business process servers (ODE), etc., based on the same OSGi core platform.
Disclaimer: I work for WSO2.

What is the proper way to make a GUI

I am working on a series of different software products. They are quite old, so we're in the process of refactoring/improving them. My co-worker had the idea of abstracting the GUI and having it run in its own process, communicating with the logical portion of the program via sockets. This would allow us to use the same GUI components with all of the different applications (keeping the same look and feel). So my question is: Is this a valid practice for creating a GUI? Would I be better off keeping the GUI tied in with the rest of the program? What are the pros and cons of the different methods, and are there any other methods for implementing GUIs?
Thanks
Yes, it's a perfectly valid way to write a GUI program. This is roughly how web apps work -- the UI (browser) communicates to the business logic server (web server) over a socket.
It's a little bit unusual for a desktop application, but it's quite acceptable. The beauty of this solution is that it lets you write multiple rich clients for different platforms (think mobile app, windows app, browser-based app, etc.)
All you need to do is define the API that a GUI will need to talk to the back end. For example, it will need a way to get objects and save objects, and to receive notifications from the back end that the UI needs updating.
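For example, the contract might start out as small as this sketch; the names are illustrative, and the transport behind it (sockets, RMI, HTTP) stays an implementation detail:

    import java.util.List;

    // Hypothetical contract between the GUI process and the business-logic process.
    public interface BackendApi {
        List<Customer> findCustomers(String query);      // fetch objects
        void saveCustomer(Customer customer);            // persist changes
        void addChangeListener(ChangeListener listener); // backend-initiated UI updates

        interface ChangeListener {
            void onChanged(String entityType, long id);
        }

        class Customer {
            public long id;
            public String name;
        }
    }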
With service and presentation layers properly designed, that should be perfectly all right. To summarize the pros and cons in my opinion:
Pros:
UI not physically bound to the logic, so the logic layers can be remote (or even a standalone BL server for several clients). Let's call this "business logic location independence".
Possibility to create different versions of the GUI (and not only graphical; it would be possible to expose the BL as a service, for example as a feed or reporting endpoint): "GUI platform independence", and also an SOA approach.
Possibility to add a proxy between the BL and GUI, for security and caching purposes. Or a load balancer in front of an application farm. Or an adapter to support "old" clients after significant BL changes. ("Resiliency and fail-safety"?)
Deployment could be easier to some extent (fixing bugs in the UI wouldn't affect the BL layer; just a consequence of binary module independence).
Ability to add "offline mode" to GUI.
Cons:
You're adding one more communication link, which could be yet another point of failure, and some effort should be spent on testing it.
Increased data traffic between the GUI and BL, and probably more serialization work.
Need to track communication protocol changes and maintain proper protocol versioning.
(Negative side of proxy ability) Possibility of man-in-the-middle attack between GUI and BL.
Depends on the type of application.
Desktop applications
It makes sense if the server part can run on a dedicated server. It does not make sense if both the server and the GUI are going to be installed on each desktop (for most applications). In that case, use different projects/DLLs to separate the UI and business logic.
Web applications
Yes. Many web applications have a separate service layer and use SOAP for communication between the GUI and the service layer.
Sockets
Using vanilla sockets is seldom a good choice today. Why waste energy/time building your own protocol and implementation when there are several excellent IPC frameworks available?
Update in response to comment
Divide and conquer. Break down the UI into components as small as possible to make them reusable. Put those components into a separate project/DLL. A sample component could be a UserTable which presents a list of all users (taking a dependency on the interface IUserService), as sketched below.
Don't try to reuse the entire UI layer, since that's doomed to fail. The reason is that if you try to make a UI which is configurable and generic, you'll probably end up spending more time on that than it would have taken to build a specific UI using reusable components. And in the end, you need to add small "hacks" to make minor changes to the generic UI layer to suit each application. It WILL end up in a mess.
Instead, reuse the above-mentioned components to build a specific UI for each application.
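A minimal Swing sketch of that idea; the row shape is simplified, and a real IUserService would return proper domain objects:

    import java.util.List;
    import javax.swing.JTable;
    import javax.swing.table.DefaultTableModel;

    // The interface each application implements for the reusable component.
    interface IUserService {
        List<String[]> listUsers(); // rows of {name, email}, simplified for the sketch
    }

    // Reusable UI component: depends only on IUserService, never on a concrete application.
    class UserTable extends JTable {
        UserTable(IUserService userService) {
            DefaultTableModel model = new DefaultTableModel(new Object[] {"Name", "Email"}, 0);
            for (String[] row : userService.listUsers()) {
                model.addRow(row);
            }
            setModel(model);
        }
    }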

Performance monitoring all layers of a system

I use several load-testing tools (LoadRunner, JMeter, NeoLoad) to performance-test different applications. I'm wondering if it is possible to monitor all layers of an application stack. For example, say I have the following data chain:
Loadbalancer <-x-> Application Server <-x-> RMI <-x-> Java Application <-x-> MQ <-x-> Legacy application <-x-> Database
Where I have marked an x in the chain is where I am interested in monitoring, for example average response times.
Obviously we could simply create a wrapper on all endpoints which would gather the statistics for us, and maybe we could import them into LoadRunner or other load-testing tools and sideline them with the tools' built-in performance statistics, but maybe there are tools/applications which already do this?
If not, how should we proceed in order to gather this kind of statistics?
The standard for this was supposed to be Application Response Measurement (ARM). It was a cross-language set of APIs that did just what you are looking for. The issue is that the products that implement this spec all tend to be big, expensive "enterprise"-level monitoring tools. Think multi-week installs, consultants, more infrastructure and lots of buzzwords.
Still, if this is a mission critical app with a mission critical budget, this may be what you need. But you may be able to build your own that does just enough without too much effort. A quick search turns up at least one open source ARM implementation if you still want to use that API.
Another option is simply to have transactions you can run against each tier of the system to check general responsiveness. For example, you could have a static web page on the LB, a no-op tx on the app server, a "hello" servlet on the Java app, put a message directly on the queue, etc. During a performance/load test, these could be hit directly by the load-testing tool, or you could write a wrapper servlet/application call that does this as a single HTTP (RMI?) call. Running these a few times a minute won't add too much load to the system, but it should help you pinpoint which tier is slower. The nice thing about this approach is that it also works in production; just watch out for security issues.
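A rough sketch of such a probe, assuming each tier exposes some cheap HTTP-reachable endpoint (the URLs are placeholders):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal responsiveness probe: hits one known-cheap URL per tier and reports elapsed time.
    // The URLs stand in for the static LB page, the no-op app-server tx, the "hello" servlet, etc.
    public class TierProbe {
        public static void main(String[] args) throws Exception {
            String[] tiers = {
                "http://loadbalancer/static/ping.html",
                "http://appserver/noop",
                "http://javaapp/hello"
            };
            for (String tier : tiers) {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) new URL(tier).openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                int status = conn.getResponseCode();
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("%s -> HTTP %d in %d ms%n", tier, status, elapsedMs);
                conn.disconnect();
            }
        }
    }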
For a single-user kind of test, where you know you have a problem (e.g. this tx is "slow"), I have also had pretty good luck with network tracing. It's very tedious, but when you aren't sure which tier is slow, starting up a network trace on a few machines and running a single tx usually gives a good idea of what the system is doing.
I have handled this decomposition a number of ways in the past. The first is at a very low level, using protocol analyzer dump data to find the time points where a conversation leaves tier X and enters tier Y. The second method is through the use of log examination for the various tiers. Something that can make your examination quite useful in this case is a common log server for all of your components (syslog, rsyslog, etc.) and a nice log-parsing tool, such as the freely available Microsoft Log Parser. The third method is utilization of the audit trail for an application stored in the database. You may find this when working on enterprise-service-bus-style applications which have a consumer/producer model and a bus to pass information rather than a direct connection. The audit trails I have seen are typically stored in a database and allow the tracking of an individual transaction through the entire application infrastructure. Your load balancer, as a network device, may be out of the hunt on this one.
Note: if you go the protocol analyzer or log route, be sure to synchronize all of your information-source devices to a common time server. Having one of your collectors (analyzer, app log) off on a timestamp basis can really be a hair-pulling experience when you get into the analysis phase.
As for how you move your collected data into LoadRunner, that part is very mechanical. The Analysis program supports an interface to import external data points. The format is very specific and is documented in both the help and the online docs. This import process works very well, as I often have to use it to collect statistics from hosts which I do not have direct monitoring access to, but which need to be included as part of the monitored test infrastructure.
James Pulley
Moderator (YahooGroups LoadRunner, Advanced-Loadrunner; GoogleGroups lr-LoadRunner; Linkedin LoadRunner, LoadRunnerByTheHour; SQAForums LoadRunner, WinRunner)

Migrating to a GUI without losing business logic written in COBOL

We maintain a system that has over a million lines of COBOL code. Does anyone have suggestions about how to migrate to a GUI (probably Windows-based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface.
If it were me, I would look into something like this:
NetCobol for Windows
It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application.
It took us about 15 years to get off of our mainframe, because we didn't do something like this.
Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during the transition from server-based apps to 3-tier applications. One I have worked with had loads of interesting features, such as drop-down lists for regularly used fields, date pop-ups and even client-based macro languages based on the scraping input.
These weren't great but worked well for the clients and made sure the applications still worked in a reliable fashion.
There are a lot of different ways to put this together, but if you put some thought into it you could probably use Java or .NET to create a desktop-based application, and with a little extra effort make a web-based implementation.
Micro Focus provides a tool called Enterprise Server which allows COBOL to interact with web services.
If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service.
For program A, you then generate a client proxy and A can now call B via a web service.
Of course, because B now has a web service any other type of program (command line, Windows application, Java, ASP etc.) can now also call it.
Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser-based approach using something like ASP while still utilising the COBOL business engine.
And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term.
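As an illustration only (the service, port and payload names below are invented, not taken from the Micro Focus tooling), a generic JAX-WS Dispatch client could call such a service like this:

    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;
    import java.io.StringReader;
    import java.net.URL;

    public class CobolServiceClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint exposing COBOL program B's interface section as a web service.
            URL wsdl = new URL("http://mainframe-gw:8080/services/ProgramB?wsdl");
            QName serviceName = new QName("http://example.com/cobol", "ProgramBService");
            QName portName = new QName("http://example.com/cobol", "ProgramBPort");

            Service service = Service.create(wsdl, serviceName);
            Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

            // Request payload mirroring the COBOL copybook fields (hypothetical).
            Source request = new StreamSource(new StringReader(
                "<checkInventory xmlns=\"http://example.com/cobol\">" +
                "<itemId>12345</itemId></checkInventory>"));
            Source response = dispatch.invoke(request);
            // ... transform/print the response as needed
        }
    }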
You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB.
Then you can begin replacing the legacy services with implementations on your new platform of choice.
The GUI need not be aware of the cut-over of the back-end service implementation, as long as the interface to the service does not change; minor changes may be hidden from the GUI by the ESB.
Business logic that resides in the legacy user interface layer will need to be refactored by extracting the business logic and exposing it as new services on the new platform to be consumed by the new GUI via the ESB.
As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native Windows platform? Then at least updates to the UI will only need to be applied to the web server, rather than having to roll out changes to each individual workstation.
