Is Mirth necessary if you know how to do REST with FHIR? - hl7-fhir

I'm new to the whole health integration world and am trying to sort out from the promo sites whether Mirth Connect is even necessary for people like us. If we're integrating with a system like Cerner or Epic, and we're comfortable writing lower-level code and building services and so on ourselves, does Mirth Connect actually do anything a software shop wouldn't want to do themselves? If so, what would that be?

In principle, an interface engine buffers your system from the other system. For a one-off implementation, that's irrelevant. But as you connect multiple systems to multiple systems (whether they differ by configuration or software type), an interface engine becomes more and more useful. That's still going to be true whether you're doing FHIR or anything else.
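To ground that: doing FHIR "by hand" really is just ordinary REST. A minimal sketch might look like the following (the base URL is a placeholder, and real Cerner/Epic endpoints also require SMART on FHIR OAuth2 on top of this):

```typescript
// Minimal sketch: read a Patient resource from a FHIR server over plain REST.
// "https://fhir.example.org/r4" is a hypothetical base URL.
const base = "https://fhir.example.org/r4";

async function getPatient(id: string) {
  const res = await fetch(`${base}/Patient/${id}`, {
    headers: { Accept: "application/fhir+json" },
  });
  if (!res.ok) throw new Error(`FHIR read failed: ${res.status}`);
  return res.json(); // the Patient resource as JSON
}
```

The interface engine question is about everything around calls like this: routing, retries, transformation, and fan-out once you have more than one connection to manage.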

Related

Should event driven architecture be targeted for all data & analytics platforms?

For example:
You have an IT estate with a mix of batch and real-time data sources from multiple systems, e.g. ERP, project management, asset management, website, monitoring, etc.
The aim is to integrate the data sources into a (vendor-agnostic) cloud environment.
There is a need for reporting and analytics on combinations of all data sources.
Inevitably, some source systems are not capable of streaming, hence batch loading is required.
There are potential use cases for performing functionality/changes/updates based on the ingested data.
Given the steer to create a future-proofed platform, how would you look to design it architecturally?
It's a very open-ended question, but there are some good principles you can adopt to help point you in the right direction:
Avoid point-to-point integration, and get everything going through a few common points - ideally one. Using an API gateway can be a good place to start; the big players (Azure, AWS, GCP) all have their own options, plus there are lots of decent independent ones like Tyk or Kong.
Batches and event streams are totally different, but you can still potentially route them all through the gateway so that you get centralised observability (reporting, analytics, alerting, etc.).
Use standards-based API specifications where possible. A good REST-based API built on a proper resource model is a non-trivial undertaking, and it may not fit with what you are doing if you are dealing with lots of disparate legacy integration. If you are going to adopt REST, use OpenAPI to specify the APIs (see the sketch after this list). Using this standard not only makes it easier for consumers, but also helps you with better tooling, as many design, build and test tools support OpenAPI. There's also AsyncAPI for event/async APIs.
Do some architecture. Moving sh*t to cloud doesn't remove the sh*t - it just moves it to the cloud. Don't recreate old problems in a new place.
Work out the logical components in your new solution: what does each of them do (what's its reason to exist)? Don't forget ancillary components like API catalogues, etc.
Think about layering the integration (usually depending on how the APIs will be consumed and what role they need to play, e.g. system interface, orchestration, experience APIs, etc.).
Want to handle data in a consistent way regardless of source (your 'agnostic' comment)? You'll need to think through how data is ingested and processed. This might lead you into more data / ETL centric considerations rather than integration ones.
Co-design. Is the integration mainly data coming in or going out? Is the integration with 3rd parties or strictly internal?
If you are designing for external / 3rd party consumers then a co-design process is advised, since you're essentially designing the API for them.
If the APIs are for internal use, consider designing them for external use so that when/if you decide to do that later it's not so hard.
Take a step back:
Continually ask yourselves "what problem are we trying to solve?". Usually, a technology initiative is successful if there's a well-understood reason for doing it, which has solid buy-in from the business (non-IT).
Who wants the reporting, and why - what problem are they trying to solve?
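To make the OpenAPI point above concrete, here's a minimal sketch of a spec for one hypothetical resource, written as a TypeScript object so it can be served or validated from code (it could equally be plain YAML; the "project" resource and its fields are made up):

```typescript
// Minimal sketch of an OpenAPI 3.0 document for a single hypothetical resource.
const openApiSpec = {
  openapi: "3.0.3",
  info: { title: "Estate Integration API", version: "1.0.0" },
  paths: {
    "/projects/{id}": {
      get: {
        summary: "Read one project from the project-management system",
        parameters: [
          { name: "id", in: "path", required: true, schema: { type: "string" } },
        ],
        responses: {
          "200": {
            description: "The project",
            content: {
              "application/json": {
                schema: {
                  type: "object",
                  properties: { id: { type: "string" }, name: { type: "string" } },
                },
              },
            },
          },
        },
      },
    },
  },
} as const;

export default openApiSpec;
```

Even a small spec like this gives consumers something self-describing to code against, and most gateway and testing tooling can import it directly.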
As you mentioned, it's an IT estate (i.e. an enterprise-level solution with a mix of batch and real-time sources), so first you have to identify the end goal of this migration. You can think about refactoring applications. If you are trying to make it event-driven, then assess the refactoring effort and cost. Separation of responsibility is the key factor for refactoring and migration.
If you are thinking about future-proofing your solution, then consider the cloud for storing and processing your data. It won't necessarily be cheap, but a mix of cloud and on-prem could be a way forward. Cloud providers offer services to move your data at minimal cost, and cloud-native solutions exist for performing analysis on your data. A database migration service in AWS or Azure can move the data and then capture ongoing changes, so you can keep using your on-prem DB and apps while performing analysis and reporting in the cloud. That also eases the load on your transactional DB. Most data sync from on-prem to cloud is near real-time.

Passively Logging React App Performance in Production

I'm wondering if there are any utilities/patterns/paradigms/standards for monitoring React applications in production.
I've seen a lot of documentation about React performance debugging that recommends the Chrome Dev Tools (which are great, but aren't a passive way to monitor end user performance)
How could I log data to know how long users are waiting for components to mount or render?
The only thing I've thought of so far is creating a Loggable[Pure]Component that extends React.[Pure]Component whose constructor, componentWillMount/Update, and componentDidMount/Update methods log render/mount times to a server. Then, components I want to monitor can extend these components and, if need be, call super() in the lifecycle methods before doing their own work. To specifically know which components these metrics go to, I'd have to expose a method in the Loggable[Pure]Component class that does something silly like setUniqueId and then each derived class would have to call it in the constructor.
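Roughly, I mean something like this (a rough sketch; logToServer and the /metrics endpoint are things I'd have to build myself):

```typescript
import React from "react";

// Hypothetical helper that ships a metric to my own collector endpoint.
function logToServer(metric: { id: string; phase: string; ms: number }) {
  navigator.sendBeacon("/metrics", JSON.stringify(metric)); // "/metrics" is made up
}

export class LoggableComponent<P = {}, S = {}> extends React.Component<P, S> {
  protected uniqueId = "unknown"; // the "setUniqueId" silliness: derived classes overwrite this
  private mountStart = performance.now(); // field initialisers run during construction

  componentDidMount() {
    // Time from construction to first mount; derived classes that override this
    // must remember to call super.componentDidMount().
    logToServer({
      id: this.uniqueId,
      phase: "mount",
      ms: performance.now() - this.mountStart,
    });
  }
}
```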
This all seems terrible and I'm very much hoping there are some things people out there have implemented, but I haven't found anything thus far.
I would have a look at some APM tools; they handle frontend monitoring as well as backend monitoring. They all support React, and folks use them all the time for this use case. It really depends on your goals for the monitoring: are you doing this for fun? Do you have a startup? Are you working for a large enterprise? There are three major players in this market.
AppDynamics - Enterprise APM, handles the most complex apps. Unified product offering delivered SaaS or on-premises. Has deep database, server, and other monitoring.
Dynatrace - Enterprise APM, handles complex apps well. Fragmented portfolio, but the SaaS product is good. The SaaS product has limited depth in some ways. Handles server and cloud infrastructure monitoring well.
New Relic - Easy and cheap(er than others), not as in-depth as some other options. Tends to be popular with small companies. Does a good job monitoring cloud infrastructure services.
These products all do what you are looking for, but it depends on your goals with the data and how you plan to analyze it.
If you want something free and less functional, there are ways to do this with open source, but you'll have to stand up and manage a pretty complex stack. Here is one option.
Check out boomerang, which can log/extract the metrics you are looking for; it doesn't "understand" React, but it should work. This data can be posted to many different systems; the best suited is likely the ELK stack (open-source log analytics, and more). Here is one of several examples that marries the two together to provide analysis of browser performance: https://github.com/naukri-engineering/NewMonk
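Getting started is small; a minimal sketch, assuming boomerang is loaded via a script tag and that /beacon is an endpoint you own that forwards into ELK:

```typescript
// The boomerang script defines a global BOOMR object at runtime.
declare const BOOMR: any;

BOOMR.init({
  beacon_url: "/beacon", // hypothetical collector endpoint that feeds your ELK stack
});
```

From there the work is mostly on the analysis side (dashboards, alerting) rather than in the app itself.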

Ways of communications between Chromium container and VB application

We have a traditional VB application which is used for organization operations. Now we are building a hybrid application, developed using HTML5, CSS and JavaScript, which targets the Google Chromium desktop container. We are planning to provide a way to exchange large data, like employee records, between these two applications. My specific question is:
What are the different ways to achieve communication between Chromium desktop container and VB application to exchange large chunks of data?
Sounds a bit painful no matter what.
Chrome Apps Architecture
All external processes are isolated from the app.
This would seem to suggest the obvious course is to use cloud data services, whether on public or private clouds.
I suspect that for political as well as practical reasons no cloud vendor goes to the trouble to provide VB/VBA-friendly APIs for their services. Mainly nobody wants to deal with support issues from the teeming hordes of casual coders the VB community is saddled with.
The VB6 community hasn't stepped up and taken care of this themselves either.
If you can limp along with the burdens of ".Net Inter Clop" (the usual MS answer) that might be a way to exploit existing API implementations.
Otherwise you might roll your own cloud. I see a few obvious services you'd want to implement in your cloud with lightweight APIs easily implemented in both of your development ecosystems:
Bulk Storage. I suggest WebDAV, which IIS supports. If you eschew the locking features, then WebDAV API implementations are pretty easy in both JS and VB (see the sketch after this list). Or buy (or scrounge open-source) implementations of a more complete WebDAV client library.
DBMS. Pick any, implement a simple REST-like XML over HTTP API. Relatively easy to implement.
Push Notifications. I'd write a custom service accepting long-duration TCP connections from all clients, and with protocols and workflow à la Amazon SNS or Google Cloud Messaging. Such a service would be generally light in resource consumption but you'd probably want a dedicated box with OS tweaks to support a large number of active TCP connections.
Maybe optionally a message queue service?
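To give a feel for how light the Bulk Storage piece can be from the JS side, here's a minimal sketch: without locking, WebDAV is mostly plain HTTP verbs. The share URL and credentials below are placeholders, and the VB side could do the equivalent PUT/GET with MSXML2.ServerXMLHTTP or WinHTTP.

```typescript
// Minimal sketch of talking to an IIS WebDAV share from the Chromium-side app.
const davBase = "https://intranet.example.com/dav"; // hypothetical WebDAV share
const auth = "Basic " + btoa("svc-account:secret");  // placeholder credentials

// Upload a batch of employee records as a file.
async function putRecords(name: string, payload: unknown): Promise<void> {
  const res = await fetch(`${davBase}/${name}`, {
    method: "PUT",
    headers: { Authorization: auth, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`WebDAV PUT failed: ${res.status}`);
}

// Download a file the VB side has written.
async function getRecords(name: string): Promise<unknown> {
  const res = await fetch(`${davBase}/${name}`, {
    headers: { Authorization: auth },
  });
  if (!res.ok) throw new Error(`WebDAV GET failed: ${res.status}`);
  return res.json();
}
```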
Nothing novel here, these are all well established patterns.
All of the tools to do that are pretty off-the-shelf whether you want your cloud servers to be based on Windows, Linux, or generically Java anywhere.
Most of the effort will probably go into developing a consistent authentication model, access control model, and of course an integrated administration interface, monitoring, and logging to help keep operating overhead low and uptime high. Well, that and developer docs and training.
Ok, still a lot of work. Too bad there isn't a "cloud in the box" with the API libraries you'd need that you can buy off the shelf today.
Or perhaps I'm missing something obvious?

Performance monitoring all layers of a system

I use several load-testing tools (LoadRunner, JMeter, NeoLoad) to performance test different applications. I'm wondering if it is possible to monitor all layers of an application stack. For example, say I have the following data chain:
Loadbalancer <-x-> Application Server <-x-> RMI <-x-> Java Application <-x-> MQ <-x-> Legacy application <-x-> Database
Where I have marked an x in the chain is where I am interested in monitoring, for example average response times.
Obviously we could simply create a wrapper on all endpoints to gather the statistics for us, and maybe import them into LoadRunner or other load-testing tools alongside the tools' built-in performance statistics, but maybe there are tools/applications which already do this?
If not, how should we proceed in order to gather this kind of statistics?
The standard for this was supposed to be Application Response Measurement (ARM). It was a cross language set of APIs that did just what you were looking for. The issue is that the products that implement this spec all tend to be big, expensive "enterprise" level monitoring tools. Think multi-week installs, consultants, more infrastructure and lots of buzzwords.
Still, if this is a mission critical app with a mission critical budget, this may be what you need. But you may be able to build your own that does just enough without too much effort. A quick search turns up at least one open source ARM implementation if you still want to use that API.
Another option is to simply to have transactions you can run against each tier of the system to check general responsiveness. For example you can have a static web page on the LB, a no-op tx on the app server, a "hello" servlet on the Java app, put a message directly on the queue, etc. During a performance / load test, these could be hit directly by the load testing tool or you could write a wrapper servlet / application call that does this as a single HTTP (RMI?) call. Running these a few times a minute won't add too much load to the system, but it should help you pinpoint which tier is slower. The nice thing about this approach is that it also works in production, just watch out for security issues.
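A rough sketch of that probe idea (every URL below is a placeholder for whatever no-op endpoint you stand up on each tier):

```typescript
// Hit a trivial endpoint on each tier and record how long it responds.
const probes: Record<string, string> = {
  loadbalancer: "https://lb.example.com/static.html",   // static page on the LB
  appserver: "https://app.example.com/noop",            // no-op tx on the app server
  javaapp: "https://app.example.com/hello-servlet",     // "hello" servlet on the Java app
};

async function runProbes(): Promise<void> {
  for (const [tier, url] of Object.entries(probes)) {
    const start = Date.now();
    try {
      await fetch(url, { method: "GET" });
      console.log(`${tier}: ${Date.now() - start} ms`);
    } catch (err) {
      console.log(`${tier}: FAILED after ${Date.now() - start} ms (${err})`);
    }
  }
}

// A few times a minute is enough to spot which tier is slowing down.
setInterval(runProbes, 20_000);
```

The timings from a probe like this can be imported alongside the load tool's own statistics, which is exactly the comparison you want during a test.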
For single user kind of test, where you know you have problem (e.g. this tx is "slow"), I have also had pretty good luck with network tracing. It's very tedious, but when you aren't sure what tier is slow, starting up a network trace on a few machines and running a single tx usually gives a good idea of what the system is doing.
I have handled this decomposition a number of ways in the past. The first is at a very low level, using protocol-analyzer dumps to find the time points where a conversation leaves tier X and enters tier Y. The second method is examination of the logs for the various tiers. Something that can make your examination quite useful in this case is a common log server for all of your components (syslog, Rsyslog, etc.) and a nice log-parsing tool, such as the freely available Microsoft Logparser. The third method is utilization of the audit trail for an application stored in the database. You may find this when working on enterprise-service-bus-style applications which have a consumer/producer model and a bus to pass information rather than a direct connection. The audit trails I have seen are typically stored in a database and allow the tracking of an individual transaction through the entire application infrastructure. Your load balancer, as a network device, may be out of the hunt on this one.
Note, if you go the protocol analyzer or log route, then be sure and synchronize all of your source information devices to a common time server. Having one of your collectors (analyzer, app log) off on a time stamp basis can really be a hair pulling experience when you get into the analysis phase.
As to how you move from your collected data into LoadRunner, that part is very mechanical. The Analysis program supports an interface to import external datapoints. The format is very specific and is documented in both help and the online docs. This import process works very well, as I often have to use it for collection of statistics from hosts which I do not have direct monitoring access to, but which need to be included as a part of the monitored test infrastructure.
James Pulley
Moderator (YahooGroups LoadRunner, Advanced-Loadrunner; GoogleGroups lr-LoadRunner; Linkedin LoadRunner, LoadRunnerByTheHour; SQAForums LoadRunner, WinRunner)

Where would I go to learn to write code that had to be very, very secure but DOES expose external services (running on a standard Windows or Linux OS)?

Where would I go to learn to write code that had to be very, very secure and that DOES expose external services (running on a standard Windows or Linux OS)? Knowing what services can and cannot be safely exposed would be part of the issue. Note that I am not looking for a favorite choice between Linux and Windows, as the choice is not likely to be mine to make in any given case. However, the level of security needs to be military grade.
I almost feel embarrassed giving this as a for-instance, but how would I know whether or not I could use, say, WCF, in such a setting?
High security is a difficult concept as it generally involves way more than just the code you wrote.
Basically every layer of the OSI model has to be taken into consideration - things like preventing capture or rerouting of the data stream between the end points (quantum cryptography).
At the higher levels, you have things like:
Physical security of the devices (all endpoints if possible).
Hardening the OS (e.g. closing ports, turning off unused services, using Kerberos, VPN tunnels, and leveraging whitelists of machines allowed to connect, etc.);
Encrypting the data at rest (file encryption), in transmission (SSL), and in memory (column/table encryption).
Ensuring and enforcing proper authentication and authorization at every level (in app, in sql, etc).
Log EVERYTHING. At a minimum it should answer "who/what/when/where/how" (see the sketch after this list).
Along with the logging, actively monitor it, i.e. intrusion detection.
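A minimal sketch of what a "who/what/when/where/how" audit record can look like (the field names and the console stand-in are illustrative; in practice you'd ship these to an append-only store your monitoring watches):

```typescript
interface AuditEvent {
  who: string;    // authenticated principal
  what: string;   // action attempted, e.g. "export-report"
  when: string;   // ISO-8601 timestamp
  where: string;  // source IP / host
  how: string;    // channel, e.g. "https+kerberos"
  outcome: "allowed" | "denied";
}

function audit(event: AuditEvent): void {
  // console.log is a stand-in for a real append-only audit sink.
  console.log(JSON.stringify(event));
}

audit({
  who: "jsmith",
  what: "export-report",
  when: new Date().toISOString(),
  where: "10.1.2.3",
  how: "https+kerberos",
  outcome: "allowed",
});
```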
Then we can move on to other attack vectors like SQL injection, XSS, internal/disgruntled employees, etc.
And once you've done all of that be prepared when a hacker gets away with everything they want simply by social engineering.
In short, the best tack to take in order to secure any computer-related application is to listen to the ethos of Fox Mulder, and Trust No One. Another favorite of mine that applies is: it's only paranoia if they aren't after you.
You could use formal methods to (sort of) prove the critical parts of your software. A tool like Frama-C (free, LGPL license, targeting embedded systems) could be relevant (at least if your software is critical, embedded, and written in C).
But "military grade" doesn't mean much. Your client will (and should) define exactly the standards to respect. For instance, critical [civilian] aircraft software needs to follow something like DO-178C (or its predecessor, DO-178B). Different industries have similar standards of their own (both the railway and medical industries have their own, which might be different in North America than in Europe).
If your system (and client) is less demanding (i.e. no billion dollars or hundreds of lives threatened by bugs), you could consider customizing your compiler or using some other tool. For example, GCC is customizable through plugins or through MELT extensions.
Don't forget that software reliability has a big price (that means a big cost for you, hence for your client).
Well, the question of where can be answered simply: not in school. I suggest creating a learning path for yourself. Pick a technology that you like and learn it inside out. A basic book to get you started should suffice; the rest you learn as you go, or via the documentation of that technology.
For instance, learning under .NET (Microsoft) involves a basic Apress textbook (I suggest Pro C# and the .NET 4.0 Platform). Thereafter, searching through the .NET Framework Reference on MSDN will give you the rest.
If you are looking for WCF reference, I suggest the (MCTS Exam 70-503, Microsoft .NET Framework 3.5 Windows Communication Foundation) and MSDN.
Just keep in mind that not a single technology will achieve what you are looking for. For example: WCF co-mingles with WF (Windows Workflow Foundation), as well as SQL Data Services and Entity Framework. Being exposed to multiple technologies will definitely broaden your vision.
===============================================================================
WCF is a beast in this regard. Here are the advantages over some other means of communication:
Messages (data) passed between end points can be secured via message-level security (encryption). The transport channel chosen can also be secured at protocol level via transport layer security (encryption).
End points themselves can authorize and impersonate clients (client-level security). You can implement end-to-end service tracing, health monitoring & performance counters, message logging, as well as forward and backward compatibility with newer/older clients (via graceful degradation of the message format, provided in WCF). If you choose to do so, you can even implement routing as a fail-safe for your communications channel. WCF also supports transactions (ACID), concurrency, and per-instance throttling, giving you the most flexibility for writing secure/robust military-grade code.
In retrospect, the security and flexibility of WCF are astonishing. A similar technology (if not the same) is the WS-Security spec. It is part of the WS-* specifications for web services and deals with XML Signature and XML Encryption to provide a secure communications channel between two end points.
The disadvantage of WS-*, however, is that it is a one-way means of communication, whereas WCF can facilitate two-way communication: a client can send a request to a server, but a server can also send requests to the client. WS-* dictates that a client can only send requests to and receive responses from the server, not vice versa.
I am not a WCF developer, so I thought these highlights might provoke you into doing your own research. "There are hundreds of ways to skin an animal; none of them is wrong..."
