How to send many messages in one go with MQ Visual Edit? - ibm-mq

I’m using MQ Visual Edit V3 to inject financial messages in a queue.
I’m able to inject one message but not several messages.
The separator I use is \n: 1 message = 1 row.
But when I inject several messages, only one is put on the queue, and its content is the concatenation of all the messages.
Can you explain how to send many messages in one go, please?

On the Import File window, use the "Every line of each file" option rather than the '\n' delimiter.
Currently, the Import File window does not support a '\n' delimiter; I'll add it to the user request list.
MQ Visual Edit does support explicitly using hex values for CRLF or LF, i.e. 0D0A or 0A, in the Delimiter field. Just remember to select Delimited File and then Hexadecimal Delimiter.

Related

Save a binary VB file (including RDW) from non-zOS file system, as a binary VB file in zOS without adding new RDW

I have downloaded a VB binary file from z/OS, with RDWs but without a BDW.
When I try to send it back, FTP treats the existing RDWs as part of the data and adds new RDWs.
Is there a known way to do this, other than editing the file manually?
Switching the ftp transfer mode to block mode should do what you want:
quote mode b
You may or may not need the quote prefix, depending on your FTP client.
Also, since you want the data set on z/OS to be RECFM=VB, you might need to tell the FTP server:
quote site recfm=vb lrecl=nnn
where nnn is the maximum record length plus 4.

How to put a file to queue by using rfhutil and mq?

I think the title is confusing, so I'll try to explain it in detail.
I have a big XML file that contains a lot of messages for a queue. The structure looks like: <objects><...>.....<...></objects>. It's an array of object tags, and each item of objects is one message. I want to put this file, broken into separate messages, onto a queue (MQ) using rfhutil.
I know it's possible, but I'm confused about the delimiter. The whole menu looks puzzling.
Is anyone using rfhutil and MQ? Maybe you have a guide for it; I couldn't find any information about rfhutil.
I hope I explained it well. Here is a pic of the rfhutil menu: Load Q
The "Load Q" option is used to send all messages from a file that was previously created with the "Save Q" option. It will not properly parse your custom XML. Even if you format the XML in a way that is parsable with a custom delimiter, the XML still contains only the message payload and lacks message headers.
However, rfhutil is part of the larger IH03 SupportPac, and there you will find:
additional software to send multiple custom messages, like mqput2
a manual on how to do it (ih03.doc)
The simplest off-the-top-of-my-head solution would be to split the XML into multiple smaller files and then send them all with one mqput2 command.
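If scripting the split is an option, the idea above can be sketched in Python. This assumes each direct child of the root <objects> element is one message; the function and file names are just placeholders, not part of rfhutil or IH03:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def split_messages(xml_path, out_dir):
    """Write each direct child of the root element (<objects>) to its own
    file, so the pieces can then be sent with a single mqput2 invocation."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    root = ET.parse(xml_path).getroot()
    for i, child in enumerate(root):
        # Each child of <objects> is assumed to be one complete message.
        (out / f"msg{i:05d}.xml").write_bytes(ET.tostring(child))
    return len(root)
```

Note this produces message payloads only; any MQ headers still have to be supplied by the sending tool.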

HL7 Message Document?

Is there a tool that can take 1000 separate HL7 messages and combine them into a single document for 7edit? I need to run a test, and if I can load one document and choose Send All, it will be better than running each of these 1000 messages manually.
Yes, there is a way to combine those messages into a single file. You can do it with any integration engine; I will use Mirth in this case.
Follow these steps in sequential order
Download and install Mirth Connect using the .exe installer (in case you don't have it).
Set up your account and do the initial configuration on your local system.
Create a Channel called Appending Channel, and set the Source inbound and outbound connectors to HL7 v2.x.
Go to the Source tab and set the Connector type to File Reader. Give the location of the directory where your messages will reside (D:\x\read in my case). Make sure you have the directory shared.
You can set Delete file after read to Yes, which will prune the files after they are read from this location. If you choose No, then specify where you want to move the files instead.
Set Process Batch files to No.
Go to the Destinations tab, create a Destination called Appender, and make it a File Writer type.
Give the directory (D:\x\Output in my case) where your final file will be placed. Give the file name as final.txt.
Choose Append on the file exists tab.
In Template, drag Raw Data from the list on the right-hand side and drop it there, or else type ${message.rawData} in the template section.
Save Channel and Deploy it.
Place all your messages in the read folder (mentioned above), and wait for Mirth to poll the folder (default setting is 1000 ms).
Once that is done, go to final.txt to see all the messages appended in the same file.
The downside is that even though this process works, the appended messages will not be separated by any delimiter, so the result will look like the below:
|2688684|||||||||||||||||||||||||199912271408||||||002376853MSH|^~\&|EPIC|EPICADT|
^ End of first message
You don't need any tool for that: 7edit is able to read multi-message files. You just need to append each message to one single text file, like this (two ADT messages):
MSH|^~\&|SystemA|CompanyA|SystemB|CompanyB|20121116122025||ADT^A01|101|T|2.5||||||UNICODE UTF-8
EVN|A01|20130823080958
PID|||1000||Lastname^Firstname
PV1||I
MSH|^~\&|SystemA|CompanyA|SystemB|CompanyB|20121116122026||ADT^A01|102|T|2.5||||||UNICODE UTF-8
EVN|A01|20130823080958
PID|||1000||Lastname^Firstname
PV1||I
Open this file with 7edit and you will see multiple messages.
Now you can send all of them at once by pressing Send and then selecting All Messages.
It is that simple - no other tool necessary (except perhaps something to do the appending into one file).
You could also try HL7Browser (www.nule.org), a tool similar to 7Edit, with fewer features but free.
You should be able to open many single-message HL7 files; HL7Browser will cache them in its viewer and should allow you to save them all to a single file.
Hope this helps,
Davide
If you have multiple HL7 files in one folder and want to combine them into one HL7 file, you can do the following:
Create a batch file in this folder named combine.cmd.
Write the following into this batch file:
del combined.hl7
for %%f in (*.hl7) do type "%%f" >> combined.hl
move combined.hl combined.hl7
Run this batch file.
Result: all HL7 files in this folder are combined into a single file called "combined.hl7". (The intermediate combined.hl name keeps the output file from matching the *.hl7 wildcard while the loop runs.)
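The same combine step can be sketched cross-platform in Python (the function name and default output name here are illustrative, not part of any tool):

```python
from pathlib import Path

def combine_hl7(folder, out_name="combined.hl7"):
    """Concatenate every *.hl7 file in `folder` into a single file,
    mirroring what the combine.cmd batch file above does."""
    folder = Path(folder)
    # Skip the output file itself, like the batch's combined.hl trick.
    parts = sorted(p for p in folder.glob("*.hl7") if p.name != out_name)
    combined = b"".join(p.read_bytes() for p in parts)
    (folder / out_name).write_bytes(combined)
    return len(parts)
```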

Google Protocol Buffers - Storing messages into file

I'm using Google Protocol Buffers to serialize equity market data (i.e. timestamp, bid, ask fields).
I can store one message into a file and deserialize it without issue.
How can I store multiple messages into a single file? Not sure how I can separate the messages. I need to be able to append new messages to the file on the fly.
I would recommend using the writeDelimitedTo(OutputStream) and parseDelimitedFrom(InputStream) methods on Message objects. writeDelimitedTo writes the length of the message before the message itself; parseDelimitedFrom then uses that length to read only one message and no farther. This allows multiple messages to be written to a single OutputStream to then be parsed separately. For more information, see https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/MessageLite#writeDelimitedTo(java.io.OutputStream)
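For languages without a built-in delimited API, the same framing can be reproduced by hand, since writeDelimitedTo simply writes the message length as a protobuf varint before the message bytes. A minimal Python sketch of that varint framing, operating on raw bytes so no protobuf dependency is needed:

```python
def encode_varint(n: int) -> bytes:
    # Protobuf base-128 varint: 7 payload bits per byte, MSB = continuation.
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        if n:
            out.append(bits | 0x80)
        else:
            out.append(bits)
            return bytes(out)

def write_delimited(stream, payload: bytes):
    # Same framing writeDelimitedTo uses: varint length, then the bytes.
    stream.write(encode_varint(len(payload)))
    stream.write(payload)

def read_delimited(stream):
    # Decode the varint length, then read exactly that many bytes.
    shift, length = 0, 0
    while True:
        b = stream.read(1)
        if not b:
            return None  # clean end of stream
        length |= (b[0] & 0x7F) << shift
        if not b[0] & 0x80:
            break
        shift += 7
    return stream.read(length)
```

The payloads here would be the output of each message's SerializeToString (or equivalent); the framing itself is format-agnostic.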
From the docs:
http://code.google.com/apis/protocolbuffers/docs/techniques.html#streaming
Streaming Multiple Messages
If you want to write multiple messages to a single file or stream, it
is up to you to keep track of where one message ends and the next
begins. The Protocol Buffer wire format is not self-delimiting, so
protocol buffer parsers cannot determine where a message ends on their
own. The easiest way to solve this problem is to write the size of
each message before you write the message itself. When you read the
messages back in, you read the size, then read the bytes into a
separate buffer, then parse from that buffer. (If you want to avoid
copying bytes to a separate buffer, check out the CodedInputStream
class (in both C++ and Java) which can be told to limit reads to a
certain number of bytes.)
Protobuf does not include a terminator per outermost record, so you need to do that yourself. The simplest approach is to prefix the data with the length of the record that follows. Personally, I tend to use the approach of writing a string-header (for an arbitrary field number), then the length as a "varint" - this means the entire document is then itself a valid protobuf, and could be consumed as an object with a "repeated" element, however, just a fixed-length (typically 32-bit little-endian) marker would do just as well. With any such storage, it is appendable as you require.
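The fixed-length marker mentioned above is easy to sketch. A minimal Python example using a 32-bit little-endian length prefix (the payload is arbitrary bytes; serialized protobuf messages would be framed the same way, and appending new records just means calling write_record again on a file opened for append):

```python
import struct

def write_record(stream, payload: bytes):
    # Prefix each record with its length as a 32-bit little-endian int.
    stream.write(struct.pack("<I", len(payload)))
    stream.write(payload)

def read_records(stream):
    # Yield one payload per length-prefixed record until end of stream.
    while True:
        header = stream.read(4)
        if not header:
            break  # clean end of stream
        (length,) = struct.unpack("<I", header)
        yield stream.read(length)
```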
If you're looking for a C++ solution, Kenton Varda submitted a patch to protobuf around August 2015 that adds support for writeDelimitedTo() and readDelimitedFrom() calls that will serialize/deserialize a sequence of proto messages to/from a file in a way that's compatible with the Java version of these calls. Unfortunately this patch hasn't been approved yet, so if you want the functionality you'll need to merge it yourself.
Another option: Google has open-sourced protobuf file reading/writing code through other projects. The or-tools library, for example, contains the classes RecordReader and RecordWriter that serialize/deserialize a proto stream to a file.
If you would like stand-alone versions of these classes that have almost no external dependencies, I have a fork of or-tools that contains only these classes. See: https://github.com/moof2k/recordio
Reading and writing with these classes is straightforward:
File* file = File::Open("proto.log", "w");
RecordWriter writer(file);
writer.WriteProtocolMessage(msg1);
writer.WriteProtocolMessage(msg2);
...
writer.Close();
An easier way is to base64 encode each message and store it as a record per line.
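That base64-per-line approach can be sketched in a few lines of Python (file path and function names are illustrative; the payloads would be serialized message bytes):

```python
import base64

def append_message(path, payload: bytes):
    # One base64-encoded record per line; safe even if payload contains \n.
    with open(path, "ab") as f:
        f.write(base64.b64encode(payload) + b"\n")

def read_messages(path):
    # Decode every non-empty line back to the original bytes.
    with open(path, "rb") as f:
        return [base64.b64decode(line) for line in f if line.strip()]
```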

Should I use a binary or a text file for storing protobuf messages?

Using Google protobuf, I am saving my serialized messaged data to a file - in each file there are several messages. We have both C++ and Python versions of the code, so I need to use protobuf functions that are available in both languages. I have experimented with using SerializeToArray and SerializeAsString and there seems to be the following unfortunate conditions:
SerializeToArray: As suggested in one answer, the best way to use this is to prefix each message with its data size. This would work great for C++, but in Python it doesn't look like this is possible - am I wrong?
SerializeAsString: This generates a serialized string equivalent to its binary counterpart - which I can save to a file, but what happens if one of the characters in the serialization result is \n - how do we find line endings, or the endings of messages for that matter?
Update:
Please allow me to rephrase slightly. As I understand it, I cannot write binary data in C++ because then our Python application cannot read the data, since it can only parse string serialized messages. Should I then instead use SerializeAsString in both C++ and Python? If yes, then is it best practice to store such data in a text file rather than a binary file? My gut feeling is binary, but as you can see this doesn't look like an option.
We have had great success base64 encoding the messages and using a simple \n to separate them. This will of course depend a lot on your use case - we need to store the messages in "log" files. Encoding/decoding naturally has some overhead - but this has not even remotely been an issue for us.
The advantage of keeping these messages as line-separated text has so far been invaluable for maintenance and debugging. Figure out how many messages are in a file? wc -l. Find the Nth message? head ... | tail. Figure out what's wrong with a record on a remote system you need to access through 2 VPNs and a Citrix solution? Copy-paste the message and mail it to the programmer.
The best practice for concatenating messages in this way is to prepend each message with its size. That way you read in the size (try a 32-bit int or something), then read that number of bytes into a buffer and deserialize it. Then read the next size, and so on.
The same goes for writing, you first write out the size of the message, then the message itself.
See Streaming Multiple Messages on the protobuf documentation for more information.
Protobuf is a binary format, so reading and writing should be done as binary, not text.
If you don't want binary format, you should consider using something other than protobuf (there are lots of textual data formats, such as XML, JSON, CSV); just using text abstractions is not enough.
