I want to connect Mirth with WorldVistA EHR and want format in HL7 or CCD - windows-vista

I want to connect Mirth with WorldVistA (EHR.dat database).
From that database, using Mirth Connect, I want to get the data in HL7 format or CCD format.
Please guide me; any help is highly appreciated.

I would create a service in VistA that exposes an interface Mirth can read from. Reading or modifying the internal database of a different application is an interoperability anti-pattern.
If no service-based integration is possible, you can always dump text files to a filesystem and pick them up in Mirth with a File Reader source connector in your channel.
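For illustration, this is the general shape of an HL7 v2.x ADT message that such a channel could produce once the VistA data is mapped; every field value below is made up:

    MSH|^~\&|WORLDVISTA|FACILITY|MIRTH|DEST|20240101120000||ADT^A01|MSG00001|P|2.5
    EVN|A01|20240101120000
    PID|1||123456^^^FACILITY^MR||DOE^JOHN||19700101|M
    PV1|1|I|WARD^101^1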

Related

How can I create an OpenTelemetry backend?

I am aware that there are open-source backends like Jaeger, Zipkin, etc., and commercial vendors like Datadog, New Relic, etc.
I wanted to know if there are any specifications we need to follow when creating a custom backend, or if there is a guide on how to do it.
I know that I can host a server and send telemetry to the server's URL. When I do this via a Collector it is in proto format, and if done via an Exporter it is in JSON format. Are there any other such points to consider?
I wanted to know if there are any specifications we need to follow when creating a custom backend, or if there is a guide on how to do it.
There is nothing like that. It's not within the scope of the OpenTelemetry project (at least for now). You are free to implement it in whatever way makes sense to you.
When I do this via a Collector it is in proto format, and if done via an Exporter it is in JSON format
This is not entirely correct. There are several combinations of protocol and encoding: OTLP exporters exist for both protobuf over gRPC and protobuf or JSON over HTTP.
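For example, with the OpenTelemetry Java SDK the wire format is chosen when you build the exporter; a rough sketch, where the endpoint URLs are placeholders for wherever your custom backend listens:

    // Sketch: two ways to export spans over OTLP from the Java SDK.
    import io.opentelemetry.exporter.otlp.http.trace.OtlpHttpSpanExporter;
    import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
    import io.opentelemetry.sdk.trace.SdkTracerProvider;
    import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

    public class OtlpExporterChoice {
        public static void main(String[] args) {
            // OTLP over gRPC (protobuf on the wire)
            OtlpGrpcSpanExporter grpcExporter = OtlpGrpcSpanExporter.builder()
                    .setEndpoint("http://my-backend:4317")            // placeholder endpoint
                    .build();

            // OTLP over HTTP (protobuf payload by default)
            OtlpHttpSpanExporter httpExporter = OtlpHttpSpanExporter.builder()
                    .setEndpoint("http://my-backend:4318/v1/traces")  // placeholder endpoint
                    .build();

            // Register whichever exporter matches what your backend accepts.
            SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                    .addSpanProcessor(BatchSpanProcessor.builder(grpcExporter).build())
                    .build();
        }
    }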

Real-time sync between Oracle DB (source) and Grakn (destination)

Is there a tool to synchronise an Oracle and a Grakn database in real time? I couldn't find any information online so any help would be greatly appreciated.
I know Grakn offers GRPC through its client drivers which I'm assuming is the way to go to push things into Grakn.
And I'm aware that there are triggers on the Oracle tables, but I'm not sure if they will slow down the application layer on the Oracle DB. How would this impact the performance of the DB?
Thanks!
There aren't any such tools that you can use out of the box. If you need to keep the Oracle system running, then I think you've got two options:
Intercept the data as it comes in and dispatch it to both Oracle and Grakn (see the sketch below).
Find a triggering mechanism and, as you outline, read from Oracle and push the data to Grakn through its driver(s).
gRPC is built into Grakn's drivers and is the only way to communicate with a Grakn server; the drivers are optimised accordingly.
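A rough sketch of the first option, a dual-write wrapper in Java: the Oracle side uses plain JDBC, while the Grakn call is left as a comment placeholder because the exact driver API depends on your Grakn client version. The table, columns, connection string, and credentials are assumptions.

    // Sketch of dual dispatch: persist each incoming record to Oracle and
    // forward the same record to Grakn. All names and credentials are made up.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class DualWriter {
        public static void main(String[] args) throws Exception {
            try (Connection oracle = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app", "secret")) {
                String name = "Alice";  // example incoming record
                try (PreparedStatement ps = oracle.prepareStatement(
                        "INSERT INTO person (name) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                // Grakn side: open a session/transaction with the Grakn client
                // for your version and run an insert over gRPC, e.g. a Graql
                // statement such as: insert $p isa person, has name "Alice";
            }
        }
    }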

How can I send data from Node-RED to Hadoop?

I need a mechanism to send data from node-red, to be stored in HDFS (Hadoop).
I prefer the data to be streamed. I am thinking about using the 'websocket out' node to write the data and a Flume agent to read it.
I am new to Node-RED.
Could you please let me know whether I am heading in the right direction, and clarify with some details if I am not? Any alternative approach would also be fine.
Update: Node-RED offers a 'bluemixhdfs' node, but it is tied exclusively to IBM Bluemix, whereas I am using plain vanilla Hadoop.
I recently had a similar issue in a small project of mine, so I'll explain my approach.
A little background: In the application, I had to do some processing on real-time streaming data from different data sources. At the same time, I also needed to store the streaming data for future processing.
I used Apache Kafka message broker as an integration agent between Node-RED and HDFS (and also for Apache Spark Stream processing engine).
In Node-RED, I used the Kafka node to publish streaming data from the different data sources to separate topics in Kafka.
[Figure: Node-RED flow with streaming data sources and Apache Kafka]
HDFS Sink Connector, a Kafka Connect component, is then used to store the streaming data to the HDFS.
[Figure: Flow architecture for Node-RED to HDFS and Spark Streaming using a Kafka message broker]
This approach can also be adopted when many streaming data sources (IoT sensors, stock market data, social media data, weather APIs, etc.) are to be connected in a single flow using Node-RED, and you then want to store the data in HDFS for further processing.
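For reference, publishing a record to a topic from plain Java looks roughly like the snippet below, which is essentially what the Node-RED Kafka node does for each message; the broker address, topic name, and payload are made up:

    // Sketch: publish one sensor reading to a Kafka topic. The HDFS Sink
    // Connector (Kafka Connect) then drains the topic into HDFS files.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SensorProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092");  // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("sensor-readings",
                        "sensor-42", "{\"temperature\": 21.5}"));
            }
        }
    }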
I'm afraid that I'm not a Hadoop expert and so probably can't provide an answer directly. However it looks like Kafka supports websockets and this should be reasonably performant.
Depending on your architecture though, you should pay some attention to websocket security. Unless NR and Hadoop are both on a private secured network, websockets may be tricky to secure properly.
I think that websocket performance would be reasonable as long as the data size per transaction isn't too large (KB rather than GB). You will need to do some testing though as there are too many factors influencing the performance of Node-RED to easily predict whether it will have the performance you require.
Node-RED supports a great many types of connectivity so if websockets don't work in your architecture, there are plenty of others such as UNIX pipes, TCP or UDP connections.

Sending values from CC3200 to my server using MQTT

How can I make my server accept the data sent by a CC3200 over the MQTT protocol? I have made the CC3200 publish values successfully to my server's IP address, but I don't know what to do to make the server dump those incoming values into its database. I actually use XAMPP for the server functionality.
Any suggestions?
I am using the HiveMQ broker.
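(For illustration, the missing piece is usually a small subscriber process next to the broker that writes each incoming message into the XAMPP MySQL database; a rough Java sketch using the Eclipse Paho client, where the broker URL, topic, database, credentials, and table are all assumptions.)

    // Sketch: subscribe to the broker and insert each payload into MySQL (XAMPP).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import org.eclipse.paho.client.mqttv3.MqttClient;

    public class MqttToMysql {
        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/sensordb", "root", "");   // assumed DB
            MqttClient mqtt = new MqttClient("tcp://my-hivemq-host:1883",  // assumed broker
                    MqttClient.generateClientId());
            mqtt.connect();
            mqtt.subscribe("cc3200/values", (topic, message) -> {          // assumed topic
                // Store the raw payload; parse it first if it is structured.
                try (PreparedStatement ps = db.prepareStatement(
                        "INSERT INTO readings (topic, payload) VALUES (?, ?)")) {
                    ps.setString(1, topic);
                    ps.setString(2, new String(message.getPayload()));
                    ps.executeUpdate();
                }
            });
        }
    }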
If your primary goal is to have some telemetry data from CC3200 stored in the database, I would suggest that you take a look at this webinar. You can configure Kaa server to use one of multiple existing log appenders to publish your data to Spark, Cassandra, MongoDB, HDFS, Couchbase, etc. There are several major benefits of doing data collection with Kaa:
All of the data is structured end-to-end. You define the telemetry data model in the Kaa UI, which is translated into Avro-compatible schemas and generates object bindings in the Kaa SDK. Instead of writing boilerplate code for data marshalling, you just invoke SDK functions like this: kaa_logging_add_record(kaa_client_get_context(kaa_client)->log_collector, log_record); where log_record is a structure auto-generated by Kaa based on your data model. On the other end, in your analytics system, you receive structured data that you can immediately start processing and querying - no need for custom interpretation code, it's auto-generated for you.
You can write to several destinations simultaneously: for example, save telemetry data into HDFS for warehousing, send to Spark for stream analytics, and push to your custom data processing/visualization service with REST. All of this is configurable by adding log appenders through the Kaa administrative UI.
Kaa takes care of the data delivery reliability and consistency. You can set up one or more reliable log appenders. It is not until all of the configured reliable appenders acknowledge a successful write that the client is instructed to remove the local data copy.
Kaa server is scalable and reliable out of the box. There is no single point of failure in the cluster. You can add more server capacity on the fly by spinning up more nodes; they register with ZooKeeper and the cluster automatically rebalances the load. If a node fails, the clients automatically migrate to the remaining nodes.
Kaa is transport agnostic, so you can plug in pretty much any transport protocol implementation you like, including MQTT. The default protocol is similar to MQTT in the amount of overhead it introduces.
The integration instructions specifically for CC3200 are being prepared for the upcoming 0.8.0 release here.
Disclaimer: I work for a company behind Kaa open-source IoT platform.

Pump data into ActiveMQ from a JDBC data source

We have an application provided by a third party which takes a stream of market data (provided by said third party), and writes it into a JDBC compatible database.
The only configuration parameters it has are the JDBC connection string, plus settings allowing us to pick what pieces of data we'd like to be stored in this database.
This is very good for static data, but we'd like to feed this data into our internal ActiveMQ messaging fabric (in addition to writing it into the DB).
The database updates are triggered by pushes of market data to us. I'd like to have this application write the data directly to a set of MQ topics by implementing some kind of JDBC "facade" that would re-route the data directly into MQ.
What I don't want to do is poll the database for new information - as I want to keep the same fluidity of the data (e.g. fast moving stocks will generate a lot more data than slow moving - and we'd want to retain this).
Advice and pointers are very much welcome!
Camel is the answer, but potentially only if you're OK with polling the database. It's great for integration problems like this. If there were some other trigger you could work with, you could use that to kick off reading the database.
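As a rough sketch of the polling variant, a Camel route could use the SQL component as a consumer and push each new row to an ActiveMQ topic; the table, columns, datasource name, and topic below are assumptions, and the route expects an ActiveMQ component plus a JSON data format (e.g. camel-jackson) to be configured:

    // Sketch: poll unpublished rows and publish them to an ActiveMQ topic.
    import org.apache.camel.builder.RouteBuilder;

    public class MarketDataRoute extends RouteBuilder {
        @Override
        public void configure() {
            // The SQL component polls with the consumer query and runs the
            // onConsume statement for each row so it is not picked up again.
            from("sql:SELECT * FROM market_data WHERE published = 0"
                    + "?dataSource=#marketDataSource"
                    + "&onConsume=UPDATE market_data SET published = 1 WHERE id = :#id")
                .marshal().json()
                .to("activemq:topic:marketdata");
        }
    }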
