What's the purpose of COM+ library applications?

When a COM+ application is created, the wizard offers a choice between a library application and a server application.
A server application is activated in a separate process, which can be used, for example, to cheaply let 64-bit consumers interop with 32-bit in-proc COM components.
What's the use of library applications that are activated right in the caller process? Why use them instead of plain old in-proc COM servers?

There are several reasons:
Performance - it is a bit faster, as calls don't have to go through cross-process marshalling and unmarshalling.
Isolation - if many different applications are using the library application, then each gets its own copy. This point matters most when dealing with the differences between an MTA (Multi-Threaded Apartment) and an STA (Single-Threaded Apartment).
A server application, by contrast, hosts the component out of process (outside the caller's process) and is shared by all the different callers (this is a great way to get cheap IPC/RPC).
OK, I am editing in a few more definitions and a bit more in the way of references:
Context is really all the state around the use of an object.
Causality is really a thread-like concept indicating the use of an object within a context. ("A causality is a distributed chain of COM method calls that spans any number of contexts in any number of processes" - from ISBN 0-201-61594-0)
Those two concepts are discussed over about 30 pages of chapter 2 of Tim Ewald's excellent book "Transactional COM+" (ISBN 0-201-61594-0).
So taking a direct quote from the summary of chapter 2:
"An object can interact with its context using object context and with a given causality using call context. These two objects provide interfaces for interacting with COM+ runtime services. This style of coding, 'reaching into context' makes COM+ development very different from classic COM development."
Finally, chapter 2 has a discussion titled "Why Library Applications?"
(which is different from your question, "Why not just plain old COM?").
His arguments largely mirror the reasons for using a plain COM object:
1. Each application has its own instance.
2. Loading into a process other than DLLHost.exe.
3. Much less overhead.
4. Simple deployment of common objects.
So the bottom line is that if you are not distributed and not transactional in nature, there may be no real advantage to using COM+ over COM. But if you write a COM+ application and deploy it as a library application, it will behave a little more like a plain COM component.
Hope that helps.

The main purpose is to benefit from COM+ application contexts.
CoGetObjectContext for IObjectContext or IObjectContextActivity will return E_NOINTERFACE for a pure in-process component, while it will succeed in a COM+ library application (and in a server application, of course).
The security context is also available through CoGetCallContext for ISecurityCallContext.
It has nothing to do with performance or isolation.
As a side note, one way to check what's available to COM+ library applications is to run dcomcnfg.exe, navigate to Component Services, Computers, My Computer, COM+ Applications, create a new library application, and check what's still enabled (as opposed to a server application).
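To make the difference concrete, here is a minimal C++ sketch (mine, not part of the original answer; the function name is invented) that probes for the COM+ object context from inside a component:

    #include <windows.h>
    #include <comsvcs.h>   // IObjectContext
    #include <cstdio>

    #pragma comment(lib, "ole32.lib")

    // In a plain in-proc COM server this is expected to fail with E_NOINTERFACE;
    // in a COM+ library or server application it should succeed.
    void CheckComPlusContext()
    {
        IObjectContext* ctx = nullptr;
        HRESULT hr = CoGetObjectContext(__uuidof(IObjectContext),
                                        reinterpret_cast<void**>(&ctx));
        if (SUCCEEDED(hr))
        {
            printf("Running inside a COM+ application context.\n");
            ctx->Release();
        }
        else
        {
            printf("No COM+ object context (hr=0x%08lX).\n",
                   static_cast<unsigned long>(hr));
        }
    }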

Related

How does COM/Automation do IPC under the hood?

In its simplest form, COM allows you to instantiate C++-like classes from a DLL in your application. Basically it's a glorified wrapper around LoadLibrary plus some conventions regarding the interface. This is called using an in-process component.
But COM also supports out-of-process components. If you instantiate a class from such a component, COM starts a new process. Your objects live in said process, and are marshalled transparently over to you, so you don't care too much about where they live. They might even be on a different computer (DCOM). You can also fetch objects from already running applications. A well-known example is controlling MS Office via a script. This is called Automation (formerly OLE Automation, and there is a bit of confusion around what exactly this term encompasses).
There are a couple of nice articles explaining how (in-process) COM works at a low level (e.g. COM from scratch). I'd like to know how it works when your component is out-of-process. Especially, what IPC does COM use under the hood to communicate between the processes? Window messages, shared memory, sockets, or something else? MSDN lists COM as an IPC method by itself, but I'm guessing it has to use something else underneath. Are different IPC methods used in different cases (instantiating an OOP component from C++, accessing an Excel document from VBScript, embedding a document in another via OLE)? It seems like it is all the same underlying technology. And lastly, how does marshalling fit into the picture? I believe it is necessary to serialize method parameters for transmitting between processes, correct?
According to this MSDN article, it's RPC.
When you instantiate an OOP component, the COM subsystem generates an in-process proxy. This proxy is responsible for packing parameters and unpacking return values. It also generates a stub in the server process which, as you would expect, unpacks parameters and packs return values.
Interestingly enough, the whole marshaling process can be customized by implementing IMarshal.
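As a rough illustration of where that proxy comes from (this sketch is mine, not the answerer's; the CLSID and interface are whatever your server registers), asking for CLSCTX_LOCAL_SERVER is enough to make COM launch the server process and hand you an in-process proxy:

    #include <windows.h>

    // Activate a component out of process. COM starts (or reuses) the server
    // process, builds the stub there, and returns an in-process proxy here.
    HRESULT CallOutOfProc(REFCLSID clsid, REFIID iid)
    {
        HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        if (FAILED(hr)) return hr;

        IUnknown* punk = nullptr;
        hr = CoCreateInstance(clsid, nullptr,
                              CLSCTX_LOCAL_SERVER,   // out-of-process activation
                              iid,
                              reinterpret_cast<void**>(&punk));
        if (SUCCEEDED(hr))
        {
            // Every method call through punk goes proxy -> RPC -> stub -> object.
            punk->Release();
        }
        CoUninitialize();
        return hr;
    }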
DCOM was originally added as an extension to COM, precisely for cross-apartment calls. Note that cross-apartment calls are not always process to process: a process can have many apartments (0 or 1 MTA and/or 0 to n STAs, etc.), and there is at least one apartment per process.
DCOM, a kind of "middleware", needed a technology for all this low-level work: data representation, caller/callee conventions, memory management, wire marshaling, session handling, security, error handling, etc., so Microsoft naturally used its in-house implementation of DCE/RPC: MSRPC. Note that, as Microsoft says on its site,
"With the exception of some of its advanced features, Microsoft RPC is
interoperable with other vendors’ implementations of OSF RPC."
There were some tentative efforts to have all this implemented by other vendors, but they were basically killed off by the rise of the internet and HTTP.
Also, note that this RPC layer uses window messages to deliver calls into STA apartments. I suggest you read this document carefully (no longer available on Microsoft's site, shame on them :-) for more details:
DCOM Architecture by Markus Horstmann and Mary Kirtland - July 23, 1997.
See also this interesting case study about a DCOM/RPC issue, which should tell you a lot about how RPC over window messages works under the hood: Troubleshooting a DCOM issue: Case Study
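For the STA case mentioned above, here is a small sketch (an assumption of mine about a typical setup, not taken from the documents cited) of why an STA thread must pump messages: incoming cross-apartment calls are delivered to it as window messages.

    #include <windows.h>

    // A thread hosting STA-based COM objects must run a message loop;
    // otherwise calls marshalled into this apartment will never be serviced.
    DWORD WINAPI StaThread(LPVOID)
    {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

        // ... create or register the STA-hosted COM objects here ...

        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)   // pump until WM_QUIT
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        CoUninitialize();
        return 0;
    }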

Does Mac OS X have a Microsoft RPC equivalent?

Microsoft RPC provides an IPC mechanism that can be done in a function-calling manner. This has been extremely helpful for my project where my main service delegates tasks to a child process, and functions in the child process can be called as if they were implemented in the main service. That takes away the burden of having to serialize abstract data and define custom protocols when using other IPC mechanisms such as named pipes, sockets, protobuf, etc. I'm aware that RPC does use them internally.
I've read an article on implementing COM for Mac OS X, which is probably the closest thing to what I need. If I find no other way of implementing the type of IPC I need, I'm probably going to go with COM, but I thought I'd make sure that I'm not missing anything.
Have a look at "XPC Services". From the documentation:
XPC services are managed by launchd and provide services to a single
application. They are typically used to divide an application into
smaller parts. This can be used to improve reliability by limiting the
impact if a process crashes, and to improve security by limiting the
impact if a process is compromised.
And later in that guide:
The NSXPCConnection API is an Objective-C-based API that provides a
remote procedure call mechanism, allowing the client application to
call methods on proxy objects that transparently relay those calls to
corresponding objects in the service helper and vice-versa.
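For a feel of the mechanics, here is a rough client-side sketch using the lower-level XPC C API (the NSXPCConnection API quoted above layers proxy objects on top of this). The service name is made up, and this assumes clang with blocks enabled (-fblocks):

    #include <xpc/xpc.h>
    #include <cstdio>

    int main()
    {
        // Connect to a bundled XPC service by its (hypothetical) label.
        xpc_connection_t conn = xpc_connection_create("com.example.helper", nullptr);

        xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
            // Errors and unsolicited messages from the service arrive here.
            if (xpc_get_type(event) == XPC_TYPE_ERROR)
                fprintf(stderr, "connection error\n");
        });
        xpc_connection_resume(conn);

        // Build a request dictionary and make a synchronous round trip.
        xpc_object_t msg = xpc_dictionary_create(nullptr, nullptr, 0);
        xpc_dictionary_set_string(msg, "op", "ping");

        xpc_object_t reply = xpc_connection_send_message_with_reply_sync(conn, msg);
        const char* status = xpc_dictionary_get_string(reply, "status");
        printf("reply: %s\n", status ? status : "(none)");

        xpc_release(reply);
        xpc_release(msg);
        xpc_release(conn);
        return 0;
    }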

What is the proper way to make a GUI

I am working on a series of different software products. They are quite old, so we're in the process of refactoring/improving them. My co-worker had the idea of abstracting the GUI, having it run in its own process and communicate with the logical portion of the program via sockets. This would allow us to use the same GUI components with all of the different applications (keeping the same look and feel). So my question is: is this a valid practice for creating a GUI? Would I be better off keeping the GUI tied in with the rest of the program? What are the pros and cons of the different methods, and are there any other methods for implementing GUIs?
Thanks
Yes, it's a perfectly valid way to write a GUI program. This is roughly how web apps work -- the UI (browser) communicates with the business-logic server (web server) over a socket.
It's a little bit unusual for a desktop application, but it's quite acceptable. The beauty of this solution is that it lets you write multiple rich clients for different platforms (think mobile app, Windows app, browser-based app, etc.).
All you need to do is define the API the GUI will use to talk to the back end. For example, it will need a way to get and save objects, and to receive notifications from the back end that the UI needs updating.
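One way to picture that API (a sketch of mine, with invented names, not something from the answer) is as an abstract interface the GUI codes against, with one implementation calling the logic in-process and another shipping the calls over a socket:

    #include <functional>
    #include <string>

    struct Order {
        int         id;
        std::string customer;
    };

    // The seam between GUI and business logic. Whether SaveOrder() runs
    // in-process or is serialized over a socket is an implementation detail.
    class IBackend {
    public:
        virtual ~IBackend() = default;

        virtual Order GetOrder(int id) = 0;
        virtual void  SaveOrder(const Order& order) = 0;

        // The back end pushes change notifications so the UI can refresh.
        virtual void  OnOrderChanged(std::function<void(const Order&)> handler) = 0;
    };

    // class LocalBackend  : public IBackend { /* calls the logic directly  */ };
    // class SocketBackend : public IBackend { /* serializes calls over TCP */ };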
With service and presentation layers properly designed, that should be perfectly all right. To summarize the pros and cons, in my opinion:
Pros:
The UI is not physically bound to the logic, so the logic layers can be remote (or even a standalone BL server serving several clients). Let's call it "business logic location independence".
The possibility of creating different versions of the GUI (and not only graphical ones - the BL could be exposed as a service, for example as a feed or reporting endpoint): "GUI platform independence", and also an SOA approach.
The possibility of adding a proxy between the BL and the GUI for security and caching purposes, or a load balancer in front of an application farm, or an adapter to support "old" clients after significant BL changes ("resiliency and fail-safety"?).
Deployment could be easier to some extent (fixing bugs in the UI wouldn't affect the BL layer - simply a consequence of binary module independence).
The ability to add an "offline mode" to the GUI.
Cons:
You're adding one more communication link, which could be yet another point of failure, and some effort will have to be spent testing it.
Increased data traffic between the GUI and the BL, and probably more serialization work.
The need to track communication protocol changes and maintain protocol versions properly.
(The flip side of being able to insert a proxy) the possibility of a man-in-the-middle attack between the GUI and the BL.
It depends on the type of application.
Desktop applications
It makes sense if the back end can run on a dedicated server. It does not make sense (for most applications) if both the server and the GUI are going to be installed on each desktop; in that case, use separate projects/DLLs to keep the UI and business logic apart.
Web applications
Yes. Many web applications have a separate service layer and use SOAP for communication between the GUI and the service layer.
Sockets
Using vanilla sockets is seldom a good choice today. Why waste energy/time building your own protocol and implementation when there are several excellent IPC frameworks available?
Update in response to comment
Divide and conquer. Break the UI down into components as small as possible to make them reusable, and put those components into a separate project/DLL. A sample component could be a UserTable which presents a list of all users (taking a dependency on the interface IUserService).
Don't try to reuse the entire UI layer, since that is doomed to fail. The reason is that if you try to make a UI that is configurable and generic, you'll probably end up spending more time on it than it would have taken to build a specific UI using reusable components. And in the end, you'll need to add small "hacks" to the generic UI layer to suit each application. It WILL end up in a mess.
Instead, reuse the above-mentioned components to build a specific UI for each application, as sketched below.
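A rough sketch of that kind of reusable component (the types are invented for illustration, matching the UserTable/IUserService example above):

    #include <string>
    #include <vector>

    struct User {
        int         id;
        std::string name;
    };

    // Each application supplies its own implementation of this interface.
    class IUserService {
    public:
        virtual ~IUserService() = default;
        virtual std::vector<User> ListUsers() = 0;
    };

    // Lives in the shared UI components library and depends only on the interface.
    class UserTable {
    public:
        explicit UserTable(IUserService& service) : service_(service) {}

        void Refresh() {
            rows_ = service_.ListUsers();   // re-query the users to display
            // ... hand rows_ to whatever widget toolkit the application uses ...
        }

    private:
        IUserService&     service_;
        std::vector<User> rows_;
    };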

Are there any reasons not to host a COM server in a COM+ application?

The simplest way to transform an in-proc COM server into an out-of-proc COM server is to create a COM+ application. What are the possible drawbacks of doing it this way?
I really can't think of any reason to create your own container, or use a 3rd-party one (if any exist), instead of MTS/COM+. I mean, it does all the things you'd want:
Lets you choose the distribution of COM objects across container processes.
Lets you configure the account they run under.
Monitors the container processes, restarts them if necessary, and can recycle them.
Even allows you to host STA components in scenarios where you need multiple threads serviced, by starting up multiple worker processes, etc.
It's hard to imagine doing better than that without spending 6 months or more on it.
Turning the question inside out, I guess your anti-self might ask, "Why are there options besides a COM+ server application for an out-of-proc COM server? What advantages do these other hosting options provide?"
I don't have anything prepared, but I am imagining a table, with the hosting options across the top as column headers and the particular attributes as row headers. You might evaluate each hosting option on each different area or attribute.
The main difference I see is in the administrative model and capabilities, and in the flexibility. For example, hosting a COM server in a Windows service gives you the Windows service capabilities: auto-start at OS boot and the admin UI associated with services.msc (both administrative/operational things), plus the flexibility to add other interfaces to that service.

Migrating to a GUI without losing business logic written in COBOL

We maintain a system that has over a million lines of COBOL code. Does anyone have suggestions about how to migrate to a GUI (probably Windows-based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface.
If it were me, I would look into something like this:
NetCobol for Windows
It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application.
It took us about 15 years to get off of our mainframe, because we didn't do something like this.
Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during the transition from server-based apps to 3-tier applications. One I have worked with had loads of interesting features, such as drop-down lists for regularly used fields, date pop-ups, and even client-side macro languages driven by the scraped input.
These weren't great but worked well for the clients and made sure the applications still worked in a reliable fashion.
There are a lot of different ways to put this together, but if you put some thought into it you could probably use Java or .NET to create a desktop-based application, and with a little extra effort make a web-based implementation as well.
Micro Focus provides a tool called Enterprise Server which allows COBOL to interact with web services.
If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service.
For program A, you then generate a client proxy and A can now call B via a web service.
Of course, because B now has a web service, any other type of program (command line, Windows application, Java, ASP, etc.) can also call it.
Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser based approach using something like ASP while still utilising the COBOL business engine.
And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term.
You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB.
Then you can begin replacing the legacy services with implementations on your new platform of choice.
The GUI need not be aware of the cut-over of the back-end service implementation, as long as the interface to the service does not change - minor changes may be hidden from the GUI by the ESB.
Business logic that resides in the legacy user interface layer will need to be refactored by extracting the business logic and exposing it as new services on the new platform to be consumed by the new GUI via the ESB.
As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native Windows one? Then at least updates to the UI only need to be applied to the web server, rather than having to roll out changes to each individual workstation.
