MQ input node failing to parse XML because an unconvertible Unicode character with hex code 0xC280 is received. The source of the incoming XML is some front end that supports all emoticons. Help appreciated.
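For what it's worth, the byte pair 0xC2 0x80 is the UTF-8 encoding of U+0080, a C1 control character that many single-byte code pages cannot represent, which would explain a converter rejecting it. A minimal plain-Java sketch of the decoding (nothing MQ-specific assumed):

import java.nio.charset.StandardCharsets;

public class DecodeC280 {
    public static void main(String[] args) {
        // 0xC2 0x80 is the two-byte UTF-8 sequence for code point U+0080,
        // a C1 control character with no mapping in many single-byte code pages.
        byte[] bytes = { (byte) 0xC2, (byte) 0x80 };
        String decoded = new String(bytes, StandardCharsets.UTF_8);
        System.out.printf("U+%04X%n", (int) decoded.charAt(0)); // prints U+0080
    }
}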
Related
While trying to implement some encryption code, I got this in the terminal:
cms/protocol: ASN.1 Error — unexpected trailing data
What does it signify?
I am not familiar with CMS or with exactly what you are doing, but based on the "unexpected trailing data" message and my knowledge of ASN.1, I would say the error means that more data was available after the end of the ASN.1 value, which suggests the data does not represent what it was claimed to represent. For example, if you have 10 bytes of data that are supposedly a PER encoding of type X, but decoding a value of X consumes only 9 of those bytes, then those 10 bytes were not a PER encoding of X.
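To make the idea concrete, here is a minimal hand-rolled Java sketch (an illustration of the concept, not the cms/protocol library itself) that reads one definite-length DER header and flags any leftover bytes as trailing data:

public class TrailingDataCheck {
    // Decodes the header of a single definite-length DER value and throws
    // if the buffer holds more bytes than that value accounts for.
    static void requireNoTrailingData(byte[] der) {
        int idx = 1;                     // skip the tag byte
        int first = der[idx++] & 0xFF;   // first length byte
        int len;
        if (first < 0x80) {
            len = first;                 // short form: length fits in 7 bits
        } else {
            int lenBytes = first & 0x7F; // long form: next n bytes hold the length
            len = 0;
            for (int i = 0; i < lenBytes; i++) {
                len = (len << 8) | (der[idx++] & 0xFF);
            }
        }
        int total = idx + len;           // header plus content
        if (total < der.length) {
            throw new IllegalArgumentException(
                "unexpected trailing data: " + (der.length - total) + " extra byte(s)");
        }
    }

    public static void main(String[] args) {
        // 0x04 0x02 'h' 'i' is a complete OCTET STRING; the trailing 0x00 is extra.
        requireNoTrailingData(new byte[] { 0x04, 0x02, 'h', 'i', 0x00 });
    }
}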
I need to send a Unicode string message to A16 COMS (a mainframe) via TCP/IP. What algorithm do I need, and what transformation of the string? The string can contain one or more Unicode characters.
When sending an ASCII-only string, I convert (map) it to EBCDIC and send it over the TCP/IP connection. I know that EBCDIC doesn't handle Unicode characters. Besides, over TCP/IP I can only send a byte array, where in the case of an ASCII string one character maps to one array cell. In the case of a Unicode character, it can occupy from 1 to 4 array cells.
The question is how to send the Unicode-containing string to the A16 mainframe.
Further clarification:
When I run the code, the TCP client does not receive any response; it hits the timeout and gives an error. Increasing the timeout does not help. C# can convert a Unicode string to UTF-8 either using System.Text.Encoding or even with an algorithm, almost manually; those are not the problem. The problem is that A16 COMS expects "one character = one byte" (mapped to EBCDIC), and with UTF-8 one character may occupy 2, 3 or 4 cells of an array. EBCDIC mapping by itself does not help, because EBCDIC is designed to work with non-Unicode (ASCII-based) strings.
I hope that someone who has done this at some point in their career reads my post, because not much can be done just by figuring it out. Can it be done with TcpClient and its NetworkStream? The Send method has only an array of bytes in its signature, but with UTF-8 the array of bytes can be so much longer than the limit.
It is a question asking people to share experience, not just knowledge.
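Not an answer from A16 experience, but a minimal Java sketch of the core tension described above. "IBM037" is just one common single-byte EBCDIC code page (assuming your JDK ships it); whether it matches the A16 COMS CCSID is an assumption:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingLengths {
    public static void main(String[] args) {
        String text = "Grüße";                             // five characters, two non-ASCII
        byte[] ebcdic = text.getBytes(Charset.forName("IBM037"));
        byte[] utf8 = text.getBytes(StandardCharsets.UTF_8);
        System.out.println(ebcdic.length);                 // 5: one byte per character
        System.out.println(utf8.length);                   // 7: ü and ß take two bytes each
    }
}

Any character outside the code page's repertoire is lost on such a conversion, which is exactly why a single-byte EBCDIC mapping cannot carry arbitrary Unicode.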
I don't think this has been asked this specifically.
I have to run a performance test on an application that consumes from an ActiveMQ topic. The sampler needs to publish 4 bytes of data to the topic, usually in the format (if you look at the hex value) of 0x006403D6.
If you translate them into decimals:
0x0064 = 00000000 01100100 ==> 100, i.e. bytes 0x00, 0x64 ==> 0, 100
0x03D6 = 00000011 11010110 ==> 982, i.e. bytes 0x03, 0xD6 ==> 3, 214
So in the above example the 4 bytes will be [0, 100, 3, 214].
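In other words, the payload is just the 32-bit value 0x006403D6 split big-endian into its four bytes. A quick plain-Java sketch of that split (no JMeter involved):

import java.nio.ByteBuffer;

public class PayloadBytes {
    public static void main(String[] args) {
        // ByteBuffer defaults to big-endian, matching the order described above.
        byte[] payload = ByteBuffer.allocate(4).putInt(0x006403D6).array();
        for (byte b : payload) {
            System.out.print((b & 0xFF) + " ");  // prints: 0 100 3 214
        }
    }
}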
To get this done I have used a JMS Publisher.
Below is the configuration:
Since I have to send a byte stream, I thought to use a BytesMessage from a file.
I tried the above with different content in the file, but none would give me [0,100,3,214].
It looks like JMeter converts the text characters into byte values. So if I have an empty file, the topic will receive 0 bytes and the application will consider it [0,0,0,0] (the application considers only the first 4 bytes).
If I have ???? in the text file, I get [63,63,63,63] (as ? ==> 00111111 ==> 63 in decimal).
But if the first byte has to be 0, I am unable to get that through, as there is no character I could find to represent 0.
Maybe there is a better way of doing this. Please advise?
Looking into the JMS Sampler JavaDoc, the setContent() function accepts a String only, so there is no way to pass bytes to it; JMeter will treat them as a simple string.
However, as per the JMS Publisher documentation, you should be able to send whatever you want, given the object is serialised by XStream.
The Object message is implemented and works as follows:
Put the JAR that contains your object and its dependencies in the jmeter_home/lib/ folder
Serialize your object as XML using XStream
Either put the result in a file suffixed with .txt or .obj, or put the XML content directly in the Text Area
Note that if the message is in a file, replacement of properties will not occur, while it will if you use the Text Area.
Also be aware that you can always switch to a JSR223 Sampler and use the BytesMessage class from Groovy code.
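For example, here is a minimal sketch of that route written as plain Java against the JMS API; the broker URL and topic name are placeholders, and the ActiveMQ client JAR is assumed to be in jmeter_home/lib/. In an actual JSR223 sampler the same calls can be made from Groovy:

import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PublishFourBytes {
    public static void main(String[] args) throws Exception {
        // Placeholders: adjust the broker URL and topic name to your environment.
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("some.topic");
            MessageProducer producer = session.createProducer(topic);

            // The exact four bytes from the question: 0x00 0x64 0x03 0xD6.
            BytesMessage message = session.createBytesMessage();
            message.writeBytes(new byte[] { 0, 100, 3, (byte) 214 });
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}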
If a Solaris-based MQ queue manager is sending messages to an intermediate Windows MQ queue manager that then sends the messages on to a Linux MQ queue manager, does the CCSID need to be changed on them all to be the same? I don't think so, but I am being pushed to do so...
I have an issue where a client is sending messages from an application on Solaris via an intermediate MQ queue manager on Windows, but the application behind the final-destination Linux MQ queue manager receives the message with CR/LF line-ending characters it can't deal with. Should the receiving-end group write a conversion exit program? Or an MCA?
IBM MQ does not deal with line feeds. It is not like transferring a file with FTP in ASCII mode, where FTP converts from the Unix LF end of line to the Windows CR and LF end of line. For MQ it is just another character in the character set to translate. If the sending app sends data that includes CR/LF characters, MQ will treat them like any other characters in the data. If the receiving application expects a file with line endings sent as an MQ message, it is required to deal with those line endings.
MQ Classes for Java applications running on a Solaris server default to CCSID 819, and an IBM MQ queue manager running on Solaris will also default to CCSID 819. CCSID 819 is described as ISO 8859-1 ASCII.
I created a file called test.txt containing the single word "test" and ran unix2dos against the file.
The output below shows the ASCII characters as hex values:
$ cat test.txt | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
Note that in the above output the ASCII hex value 0d is the CR and 0a is the LF; these are the common Windows end of line.
If we assume that the default CCSID 819 is used for both the Solaris MQ Classes for Java application and the Solaris queue manager, then we can start out with the assumption that the above two hex values represent CR/LF at the end of each line.
You stated that your Windows queue manager has CCSID 437, which is typical for a US-based Windows server. CCSID 437 is described as USA PC-DATA and is also ASCII.
Linux queue managers typically default to CCSID 1208. CCSID 1208 is described as UTF-8 with IBM PUA; it is a variable-byte character set that can have from one to four bytes per character. It can represent over a million characters, including all ASCII characters. All 128 ASCII characters are represented in UTF-8 by the same single-byte hex values as in ASCII.
Going from ASCII to UTF-8 is lossless; going from UTF-8 to ASCII can be lossy if non-ASCII characters are used, since there is no equivalent character in ASCII and MQ converts them to the default substitution character with hex value 1A. I have seen this, for example, with the Euro symbol. Most if not all of the first 256 characters of UTF-8 are the same as in CCSID 819.
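A small Java sketch of that lossy direction; configuring the encoder's replacement byte to 0x1A here only mimics MQ's substitution character, it is not MQ itself:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class LossyToLatin1 {
    public static void main(String[] args) throws CharacterCodingException {
        // ISO 8859-1 (CCSID 819) has no Euro sign, so encoding must substitute.
        CharsetEncoder encoder = StandardCharsets.ISO_8859_1.newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPLACE)
                .replaceWith(new byte[] { 0x1A });  // mimic MQ's substitution byte
        ByteBuffer out = encoder.encode(CharBuffer.wrap("€5"));
        while (out.hasRemaining()) {
            System.out.printf("%02x ", out.get()); // prints: 1a 35
        }
    }
}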
Given the above CCSID assumptions the conversion would look like this:
Original IBM MQ Classes for Java app running on Solaris sending data with CR/LF end of line characters:
$ cat test.txt | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
Solaris Queue Manager Sender channel with CONVERT(YES) sending to Windows Queue Manager with CCSID 437:
$ cat test.txt | iconv -f IBM819 -t IBM437 | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
As expected the output is the same, since both 819 and 437 are ASCII character sets and the data did not contain anything outside the first 128 ASCII characters.
Solaris Queue Manager sender channel with CONVERT(YES) sending to the Windows Queue Manager with CCSID 437, whose sender channel with CONVERT(YES) sends on to the Linux Queue Manager with CCSID 1208:
$ cat test.txt | iconv -f IBM819 -t IBM437 | iconv -f IBM437 -t UTF-8 | hexdump -C
00000000 74 65 73 74 0d 0a |test..|
00000006
As expected the output is the same, since both 819 and 437 are ASCII character sets and the first 128 characters of UTF-8 (1208) are plain ASCII characters.
Summary: if the sending application is sending CR/LF in the data, MQ message conversion will not change this, and with the CCSIDs listed above it does not even change the actual hex character values. The sending application would need to change what it sends, or the receiving application would need to accommodate these characters.
A good reference on ASCII, Unicode, UTF-8 and more can be found in the article "Unicode, UTF8 & Character Sets: The Ultimate Guide".
I smell a bad setup in your MQ environment, and anyone who says to make them all the same doesn't know or doesn't understand MQ (and should have zero opinion about the configuration).
Also, setting the sender channel CONVERT attribute to YES is a VERY bad idea and should ONLY be used in extreme cases; see IBM best practices for more information. The ONLY time data conversion should be done is when the application issues an MQGET with convert.
Since Solaris has no clue about CR/LF, I am going to guess at what is going wrong:
The message's Format field of MQMD has the value 'MQSTR' set.
The sender channel from the Solaris queue manager has the CONVERT attribute set to YES
The receiving application on Linux did NOT issue an MQGET with convert.
Questions:
Did the Solaris application actually put the message with CR/LF?
Did the Solaris application set the MQMD Format field to 'MQSTR'?
Why are the messages hopping through the Windows queue manager? Why not make a direct connection between the Solaris queue manager and the Linux queue manager?
What value does the CONVERT attribute of the Solaris sender channel have? i.e. Is it Yes or No?
The simplest solution is to have the Linux application issue an MQGET with convert, assuming the MQMD Format field is set to 'MQSTR'. MQ will automatically convert the message data for the receiving platform.
If the MQMD Format field is not set to 'MQSTR', then MQ will NOT convert the message data between platforms. That would imply that the Solaris application put the CR/LF in the message, which I find hard to believe. If a developer did that then they truly do not know MQ (and should not be programming it).
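For reference, a minimal sketch of that MQGET-with-convert on the Linux side using MQ Classes for Java; the queue manager and queue names are placeholders:

import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class GetWithConvert {
    public static void main(String[] args) throws Exception {
        // Placeholders: use your own queue manager and queue names.
        MQQueueManager qmgr = new MQQueueManager("LINUX.QMGR");
        MQQueue queue = qmgr.accessQueue("IN.QUEUE", CMQC.MQOO_INPUT_AS_Q_DEF);

        // MQGMO_CONVERT asks MQ to convert the message data to this
        // application's CCSID on the get, provided the MQMD Format is MQSTR.
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_CONVERT | CMQC.MQGMO_WAIT;
        gmo.waitInterval = 5000; // wait up to five seconds for a message

        MQMessage message = new MQMessage();
        queue.get(message, gmo);
        System.out.println(message.readStringOfByteLength(message.getMessageLength()));

        queue.close();
        qmgr.disconnect();
    }
}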
I have a Modbus device I am trying to communicate with through an Ethernet-to-RS485 adapter. I'm not sure whether the device uses Modbus ASCII or RTU.
I am trying to format a request to a device with address 1. The command code is 11h. I'm not sure I'm formatting the request properly.
Here is the string I am using for ASCII: ":010B000000000C\x0D\x0A"
Here is the hex I'm using for RTU: "\x01\x0B\x00\x00\x00\x00\x0B\xA4"
When I send this command it is echoed back, but I'm not getting responses. I've been through the Modbus documentation and I think I have the correct byte structure. I'm wondering if I'm encoding it right for Ruby?
It turned out my Ethernet-to-RS485 adapter wasn't capable of the correct timing for Modbus. Once I purchased a new unit, the ASCII strings worked.
Are you sure the checksum should be written as raw bytes, not in ASCII? I mean, try sending :010B000000000C0D0A instead of :010B000000000C\x0D\x0A.
Also, you wrote that the command is 11h; to my understanding that is 0x11 (hex), yet you are sending 0x0B. Or is the command 11 (dec)?
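To take some guesswork out of the checksum discussion, here is a sketch (in Java rather than the asker's Ruby) of the two standard Modbus checksums, computed over the address, function and data bytes 01 0B 00 00 00 00 from the question. In RTU framing the CRC-16 is appended low byte first; in ASCII framing the LRC is sent as two ASCII hex digits before the CR/LF:

public class ModbusChecksums {
    // CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001.
    static int crc16(byte[] data) {
        int crc = 0xFFFF;
        for (byte b : data) {
            crc ^= b & 0xFF;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 1) != 0 ? (crc >>> 1) ^ 0xA001 : crc >>> 1;
            }
        }
        return crc;
    }

    // LRC for Modbus ASCII: two's complement of the byte sum.
    static int lrc(byte[] data) {
        int sum = 0;
        for (byte b : data) {
            sum += b & 0xFF;
        }
        return (-sum) & 0xFF;
    }

    public static void main(String[] args) {
        byte[] pdu = { 0x01, 0x0B, 0x00, 0x00, 0x00, 0x00 };
        System.out.printf("CRC-16: %04X%n", crc16(pdu)); // for cross-checking the RTU frame
        System.out.printf("LRC:    %02X%n", lrc(pdu));   // for cross-checking the ASCII frame
    }
}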