CCSID on MQ Managers on different platforms - ibm-mq

If a Solaris-based MQ queue manager is sending messages to an intermediate Windows MQ queue manager that then sends the messages on to a Linux MQ queue manager, does the CCSID need to be changed so that they are all the same? I don't think so, but I am being pushed to do so...
I have an issue where a client is sending messages from an application on Solaris via an intermediate queue manager on Windows, but the destination application behind the Linux queue manager receives the messages with CR/LF line-ending characters it can't deal with. Should the receiving group write a data-conversion exit? Or an MCA exit?

IBM MQ does not deal with line endings. It is not like transferring a file with FTP in ASCII mode, where FTP will convert a Unix LF end of line to a Windows CR/LF end of line. To MQ it is just another character in the character set to translate. If the sending app sends data that includes CR/LF characters, MQ will treat them like any other characters in the data. If the receiving application expects a file with line endings to be sent as an MQ message, it is the one required to deal with those line endings.
MQ Classes for Java applications running on a Solaris server default to CCSID 819, and an IBM MQ queue manager running on Solaris will also default to CCSID 819. CCSID 819 is described as ISO 8859-1 ASCII.
I created a file called test.txt containing the single word "test" and ran unix2dos against the file.
The output below shows the ASCII characters as HEX values:
$cat test.txt | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
Note that in the above output the ASCII hex value 0d is the CR and 0a is the LF; together these form the common Windows end of line.
If we assume that the default CCSID 819 is used for both the Solaris MQ Classes for Java application and the Solaris queue manager, then we can start out with the assumption that the above two hex values represent CR/LF at the end of each line.
You stated that your Windows queue manager has CCSID 437, which is typical for a US-based Windows server. CCSID 437 is described as USA PC-DATA and is also ASCII.
Linux queue managers typically default to CCSID 1208. CCSID 1208 is described as UTF-8 with IBM PUA; it is a variable-byte character set that can use from one to four bytes per character. It can represent over a million characters, including all ASCII characters. All 128 ASCII characters are represented in UTF-8 with the same single-byte hex values as in ASCII.
Going from ASCII to UTF-8 is lossless; going from UTF-8 to ASCII can be lossy if non-ASCII characters are used, since there is no equivalent character in ASCII and MQ converts each one to the default substitution character, hex value 1A. I have seen this, for example, with the Euro symbol. The first 256 code points of Unicode are the same as CCSID 819, although in UTF-8 the code points above 127 are encoded as two bytes each.
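To make the loss concrete, here is a minimal Python sketch (not MQ itself) that mimics the substitution, using latin-1 as a stand-in for CCSID 819 and a custom error handler for MQ's 0x1A substitution character:
import codecs

# MQ substitutes unconvertible characters with hex 1A; Python's default
# 'replace' handler uses '?', so register a handler that mimics MQ instead.
codecs.register_error("mq_sub", lambda err: ("\x1a", err.end))

euro = "€"                                     # no equivalent in CCSID 819 or 437
print(euro.encode("utf-8").hex())              # e282ac - three bytes in 1208
print(euro.encode("latin-1", "mq_sub").hex())  # 1a - the substitution character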
Given the above CCSID assumptions the conversion would look like this:
Original IBM MQ Classes for Java app running on Solaris sending data with CR/LF end of line characters:
$cat test.txt | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
Solaris Queue Manager Sender channel with CONVERT(YES) sending to Windows Queue Manager with CCSID 437:
cat test.txt | iconv -f IBM819 -t IBM437 | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
As expected the output is the same, since both 819 and 437 are ASCII character sets and the data contained nothing outside the 128 ASCII characters.
Solaris Queue Manager sender channel with CONVERT(YES) sending to the Windows Queue Manager (CCSID 437), whose sender channel with CONVERT(YES) sends on to the Linux Queue Manager (CCSID 1208):
cat test.txt | iconv -f IBM819 -t IBM437 | iconv -f IBM437 -t UTF-8 | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
As expected the output is the same, since both 819 and 437 are ASCII character sets and the first 128 characters of UTF-8 (1208) are the normal ASCII characters.
Summary: If the sending application is sending CR/LF in the data, MQ message conversion will not change this, and with the CCSIDs listed above it does not even change the actual hex character values. The sending application would need to change what it sends, or the receiving application would need to accommodate these characters.
A good reference on ASCII, UNICODE, UTF-8 and more can be found in the article "Unicode, UTF8 & Character Sets: The Ultimate Guide".

I smell a bad setup in your MQ environment and anyone who says to make them all the same doesn't know or doesn't understand MQ (& should have zero opinion about the configuration).
Also, setting the sender channel CONVERT attribute to YES is a VERY bad idea and should ONLY be used in extreme cases; see IBM best practices for more information. The ONLY time data conversion should be done is when the application issues an MQGET with Convert.
Since Solaris has no notion of CR/LF, I'm going to guess at what is going wrong:
The message's MQMD Format field is set to 'MQSTR'.
The sender channel from the Solaris queue manager has the attribute CONVERT set to YES.
The receiving application on Linux did NOT issue an MQGET with Convert.
Questions:
Did the Solaris application actually put the message with CR/LF?
Did the Solaris application set the MQMD Format field to 'MQSTR'?
Why are the messages hopping through the Windows queue manager? Why not make a direct connection between the Solaris queue manager and the Linux queue manager?
What value does the CONVERT attribute of the Solaris sender channel have? i.e. Is it Yes or No?
The simplest solution is to have the Linux application issue an MQGET with Convert, assuming the MQMD Format field is set to 'MQSTR'. MQ will automatically convert the message data for the receiving platform.
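For illustration, here is a minimal sketch of such a get using the pymqi client library (the queue manager, channel, connection string, and queue names are placeholders, not taken from the question):
import pymqi
from pymqi import CMQC

# Placeholder client connection to the Linux queue manager.
qmgr = pymqi.connect('QM_LINUX', 'APP.SVRCONN', 'linuxhost(1414)')
queue = pymqi.Queue(qmgr, 'APP.IN.QUEUE')

md = pymqi.MD()
md.CodedCharSetId = 1208                     # request the Linux default, UTF-8
gmo = pymqi.GMO(Options=CMQC.MQGMO_CONVERT)  # MQGET with Convert

data = queue.get(None, md, gmo)              # converted only if Format is MQSTR
print(data.decode('utf-8'))

queue.close()
qmgr.disconnect()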
If the MQMD Format field is not set to 'MQSTR' then MQ will NOT convert the message data between platforms. That would imply that the Solaris application put the CR/LF in the message. I find this hard to believe. If a developer did this then they truly do not know MQ (and should not be programming it).

Related

Concurrent DOS + QEMU losing data through parallel port emulation

I have a piece of software running on Concurrent DOS 3.1, which I emulate with QEMU 5.1.
In this program there are several options to print data. The problem is that the data arriving at my host does not correspond to the data sent.
The command to start QEMU:
qemu-system-i386 -chardev file,id=imp0,path=/path/to/file -parallel chardev:imp0 -hda DISK.Raw
So the output sent on parallel port of my guest is redirected to /path/to/file.
When I send the character 'é' from CDOS:
echo é>>PRN
The code page used on CDOS is Code Page 437, and in this character set the character é is represented by 0x82, but on my host I instead receive the following:
cp437 é -> 0x82 ---------> host -> x1b52 x017b x1b52 x00
So I tried something else. I wrote the character 'é' to a file and sent the file with nc.exe (from Brutman's mTCP), and with nc the value stays 0x82.
So my question: what happens when I send my data to the virtual parallel port? Where does my data get transformed? Is it the parallel port driver on Concurrent DOS? Is it QEMU? I can't figure out how to send my data through LPT1 properly.
I also tried this:
qemu-system-i386 -chardev socket,id=imp0,host=127.0.0.1,port=2222,server,nowait -parallel chardev:imp0 -hda DISK.Raw
I can read the socket fine, but I get the same output as when writing to a file: the é gets transformed to x1b52 x017b x1b52 x00.
The byte sequence "1B 52 01 7B 1B 52 00" is using Epson FX-style printer escape sequences (there's a reference here). Specifically, "1B 52 nn" is ESC R n, which selects the international character set, where character set 0 is USA and 1 is France. So the sequence as a whole is "Select the French character set; print byte 0x7B; select the US character set". In the French character set for this printer standard, 0x7B is the e-acute é.
This is almost certainly the CDOS printer driver assuming that the thing on the end of PRN: is an Epson printer and emitting the appropriate escape sequences to output your text on that kind of printer.
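To illustrate, here is a rough Python sketch that decodes the captured bytes under that interpretation (the France table below is reduced to the single 0x7B -> é entry identified above; real Epson tables remap several more positions):
# Character-set overrides: set 0 = USA (plain cp437/ASCII),
# set 1 = France, where 0x7B prints as é.
OVERRIDES = {0: {}, 1: {0x7B: "é"}}

def decode_epson(data):
    out, charset, i = [], 0, 0
    while i < len(data):
        if data[i] == 0x1B and i + 2 < len(data) and data[i + 1] == 0x52:
            charset = data[i + 2]    # ESC R n: switch international character set
            i += 3
        else:
            out.append(OVERRIDES.get(charset, {}).get(data[i], chr(data[i])))
            i += 1
    return "".join(out)

print(decode_epson(bytes.fromhex("1B52017B1B5200")))  # prints: é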
OK, so I finally figured it out... After hours of searching for where this printer.sys driver could be, or how to remove it: on Concurrent DOS, the setup command is "n". And of course, it is not listed in the "help" command...
Anyway, in there you can set up your printer and select "no conversion" for the port you want. And it was indeed set to Epson MX-80/MX-100.
So thanks, Peter, your answer led me down the right path!

NXP NTAG 424: write command returns 917E "Length Error". Why?

I have started working with the NXP NTAG 424 TT chip together with nfcpy and an Identive SCL3711 Reader/Writer. I can successfully send and receive APDU commands, securely authenticate myself and send and receive commands in encrypted communication mode.
However I can't read or write data on the chip, and I don't know why. Here is what I do (mostly taken from page 24 of the NXP application note):
I send the command "ISO Select NDEF application using DF Name"
00A4040C07D276000085010100
Then I perform the secure authentication protocol via AuthenticateEV2First with key 0x00
I try to write some data as follows:
cmd_header = 02000000040000
cmd_data = 00D1FF00 (before padding)
cmd_data = 00D1FF00800000000000000000000000 (after padding)
The complete command which I send looks like this:
cla ins p1 p2 Lc | ISO header       | encrypted data                                   | Le
90  8D  00 00 1F | 02 000000 040000 | 6688A4D75482FC972C2447A1A20F0AC9C073C1CF506B2BD3 | 00
However the chip only responds with 917E "Length Error", which translates to "Command size not allowed".
What am I doing wrong? It can't be the encryption; I tested that with various other commands (GetTTStatus, SetConfiguration) and these all worked fine. I quadruple-checked the header. Did I perhaps fail to select the correct file, or did I miss some other step? Also, what does "Command size not allowed" mean? The error is pretty cryptic to me (which is funny when working with encrypted chips :D).
Any help is greatly appreciated!
Best regards,
Phil
The length of the "encrypted data" field in your case is 24 bytes, whereas the length you have specified in the ISO header is "040000", i.e. 4 bytes.
Your encrypted data length should match the length of the data you are writing.
In your case the two lengths don't match, and that results in the error.
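A quick Python sanity check of the lengths in the APDU from the question (the bytes are copied verbatim from it):
# The full command APDU from the question, as hex.
apdu = bytes.fromhex(
    "908D00001F02000000040000"
    "6688A4D75482FC972C2447A1A20F0AC9C073C1CF506B2BD3"
    "00")

lc = apdu[4]                                   # 0x1F = 31
body = apdu[5:-1]                              # ISO header + encrypted data, Le excluded
print(lc, len(body))                           # 31 31 - Lc itself is consistent

hdr_len = int.from_bytes(body[4:7], "little")  # "040000" -> 4, the declared data length
enc_len = len(body[7:])                        # encrypted data including padding and MAC
print(hdr_len, enc_len)                        # 4 vs 24 - the mismatch described above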
Hope the information is clear.
Cheers!

Sending a UNICODE string to A16 COMS Mainframe via TCP/IP

I need to send a Unicode string message to A16 COMS (a mainframe) via TCP/IP. What algorithm do I need, and what transformation of the string? The string can contain one or more Unicode characters.
When sending an ASCII-only string I convert (map) it to EBCDIC and send it over the TCP/IP connection. I know that EBCDIC doesn't handle Unicode characters. Besides, I can only send a byte array via TCP/IP, where in the case of an ASCII string one character maps to one array cell. In the case of a Unicode character, it can occupy from 1 to 4 array cells.
The question is how do I send the Unicode-containing string to the A16 mainframe.
Further clarification:
When I run the code, the TCP client does not receive any response; it hits the timeout and gives an error. Increasing the timeout does not help. C# can convert a Unicode string to UTF-8, either with System.Text.Encoding or even almost manually with an algorithm. Those are not the problem. The problem is that A16 COMS expects "one character = one byte" (mapped to EBCDIC), and with UTF-8 one character may occupy 2, 3 or 4 cells of the array. EBCDIC mapping by itself does not help, because EBCDIC is designed to work with non-Unicode (ASCII-based) strings.
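To illustrate the mismatch (Python is used here just for demonstration; cp500 is one common EBCDIC code page, and the actual A16 code page may differ):
print(len("test".encode("cp500")))      # 4 - one EBCDIC byte per character, as COMS expects
s = "naïve €"
print(len(s), len(s.encode("utf-8")))   # 7 characters but 10 UTF-8 bytes
# And this EBCDIC code page has no code point for the Euro sign at all:
# "€".encode("cp500") raises UnicodeEncodeError.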
I hope that someone who has done this at some point in their career might read my post, because not much can be achieved by just figuring it out. Can it be done with TcpClient and its NetworkStream? The Send method only takes an array of bytes in its signature, but with UTF-8 the array of bytes can be much longer than the limit.
This is a question asking for shared experience, not just knowledge.

Decode incoming SMS numbers

I use a SIMCOM GSM module to receive incoming messages. When I send an SMS from my mobile phone I see my normal number:
+CMT: "+38012345678", ...
But when an SMS comes from my mobile operator, or from some named SMS service such as Google, I see some garbage, like here from Google:
+CMT: "16p6p6w237562767963656", ...
one more:
+CMT: "w49511#495946535451425", ...
and more:
+CMT: "#497966737471627", ...
According to the module documentation this parameter is named <oa> and means the GSM 03.40 TP-Originating-Address Address-Value string field.
Is it possible to decode it in any programming language, e.g. Python? What can it be? If I switch to UCS2 and decode from that, the result is absolutely the same.
According to SIM800 Series AT Command Manual v1.10, page 114:
GSM 03.40 TP-Destination-Address Address-Value field in string
format; BCD numbers (or GSM default alphabet characters) are converted
to characters of the currently selected TE character set (refer
Command +CSCS in 3GPP TS 27.007); type of address given by
If the phone number in a CMT message does not start with a "+" sign, it is encoded with BCD numbers.
I tried to compare those numbers with the ASCII table. This is not exactly BCD encoding, but it looks very similar.
To decode "16p6p6w237562767963656", split it into pairs: 16 p6 p6 w2 37 56 27 67 96 36 56
then reverse each pair: 61 6p 6p 2w 73 65 72 76 69 63 65
Now compare to the hex codes in the ASCII table and you get the result: all services. You may wonder how to read 6p 6p 2w. So did I!
After searching other examples of encoded numbers I made the assumption that the hex digits 0 and A-F are replaced by other characters:
0 - w
A
B - #
C - p
D
E - +
F - #
I have no idea why these hex digits were replaced by seemingly random letters.
"w49511#495946535451425" stands for "#Y?KYIVSTAR". The code "11" is unprintable and is replaced here by "?".
"#497966737471627" stands for "Kyivstar".
Are you sure your module is set to text format (AT+CMGF=1) when receiving those SMS? If you switched your module off and on again, it is probably set to "PDU" mode, which is more suited for computers than humans.
See the SIMCOM AT command manual for details; it's very extensive (a 380-page PDF).

Sending non-printable ASCII codes (like 0x80) through bash and Telnet

I'm trying to send non-printable characters (codes 128-255) through telnet to a Ruby app that uses Socket objects to read data in.
When I try to send \x80 through telnet, I expect Ruby to receive a string of 3 bytes: 128 13 10.
I actually receive a string of 6 bytes: 92 120 56 48 13 10.
Do I need to change something about how telnet is sending the information, or how the Ruby socket is accepting it? I've read through all the telnet jargon I can comprehend. A point in the right direction would be very much appreciated.
92 120 56 48 13 10
is in decimal ASCII:
\ x 8 0 \r \n
So you are doing something very wrong, and it's not telnet. The escape sequence \x80 was treated literally instead of being understood as a single character with code 128.
I guess you have used '\x80' instead of "\x80". Note the different quotes. For a single character you can also use Ruby's ? literal: ?\x80. So for example:
"\x80\r\n" == ?\x80 + ?\r + ?\n
=> true
while of course
'\x80\r\n' == "\x80\r\n"
=> false
--
To summarize the long story from the comments:
originally the data to be sent was entered manually through a telnet terminal
telnet terminals often don't accept any escape codes and "just send" everything they get directly; sometimes copying and pasting text with special characters works, and sometimes terminals provide some extra UI goodies for sending special characters - but this terminal was very basic, pasting didn't work, and there were no UI goodies
instead of entering the data manually, sending a file through a pipe to the telnet terminal seemed to work much better; some data arrived, but it was not right
piping the data to nc (netcat) instead of the telnet terminal almost worked; binary data arrived, but it was still not perfect
after examining the input file (the one piped to nc) with the hexdump utility, it turned out that the file did not contain exactly what we thought: the editor used to create the file had saved the text with the wrong encoding and added some extra unwanted bytes
finally, a utility called xxd helped produce correct binary data from a tailored hex text; the output of xxd can be piped directly to nc (netcat) - see the sketch below
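For completeness, a minimal Python equivalent of that last xxd | nc step, sending the raw bytes directly over a socket (host and port are placeholders):
import socket

payload = bytes.fromhex("800d0a")   # 128 13 10 - no shell escaping involved
with socket.create_connection(("localhost", 4000)) as sock:
    sock.sendall(payload)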
