Extra logs written when posting messages to IBM MQ - JMS

I have a job that posts messages onto IBM MQ.
I have configured my logs to write to a file and not console.
Yet every time I run this job I see a large amount of log output on the console, like the sample below (I have only changed the IPs and the company name in the logs).
What is the source of this output?
Why does it come up?
And how do I stop it?
All my messages get posted successfully, so from an end-user perspective the job is working fine; however, I am not able to make out why this shows up on the console.
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], READ: SSLv3 Application Data, length = 72
main, WRITE: SSLv3 Application Data, length = 68
[Raw write]: length = 73
0000: 17 03 00 00 44 54 53 48 43 00 00 00 30 01 0C 30 ....DTSHC...0..0
0010: 00 00 00 00 00 00 00 00 00 00 00 01 11 03 33 00 ..............3.
0020: 00 00 00 00 01 00 00 00 00 00 00 00 02 00 00 00 ................
0030: 00 00 00 00 00 41 69 2A 27 7E EB 3A 9B 47 4A 02 .....Ai*'..:.GJ.
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], received EOFException: ignored
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], called closeInternal(false)
RcvThread: com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection[qmid=CANNED_DATA,fap=10,peer=naumib3.mydomain.net/112.999.138.25,localport=56857,ssl=SSL_RSA_WITH_NULL_SHA,peerDN="CN=ibmwebspheremqnaumib3, OU=For Intranet Use Only, OU=For Intranet Use Only, O=My Company, L=New York, ST=New York, C=US",issuerDN="CN=VeriSign Class 3 Secure Server CA - G3, OU=Terms of use at https://www.verisign.com/rpa (c)10, OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US"], SEND SSLv3 ALERT: warning, description = close_notify

The output looks like it might come from some kind of debug or diagnostic patch applied to the MQ JMS/Java client you have installed. The RcvThread is the thread used internally to listen on the TCP socket for data from the queue manager. Do you know of a patch that might have been applied in the past to help with TCP connection issues?
You might want to look at the com.ibm.mq.jmqi.jar contained in the MQ client you are using, to see whether there is a difference in the timestamp or anything noted in the manifest file within the jar itself.
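If it helps, that manifest can be dumped with a few lines of throwaway Java; a minimal sketch (the jar path shown is only an example and will differ on your install):
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestDump {
    public static void main(String[] args) throws Exception {
        // Example path only - point this at the jmqi jar shipped with your MQ client
        try (JarFile jar = new JarFile("/opt/mqm/java/lib/com.ibm.mq.jmqi.jar")) {
            Manifest mf = jar.getManifest();
            if (mf != null) {
                // Print every main attribute, e.g. build level and timestamps if present
                mf.getMainAttributes().forEach((k, v) -> System.out.println(k + ": " + v));
            }
        }
    }
}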

I agree with the previous answer that this is not normally the format of any logs written by the JMS client code. Under normal circumstances there are only two cases in which logs are written to stdout.
The first is the JMS client log file, controlled by:
# Name(s) of the log file(s)
# Can be
# * a single pathname
# * a comma-separated list of pathnames (all data is logged to all files)
# Each pathname can be
# * absolute or relative pathname
# * "stderr" or "System.err" to represent the standard error stream
# * "stdout" or "System.out" to represent the standard output stream
com.ibm.msg.client.commonservices.log.outputName=mqjms.log
The second is what is called JMS startup trace, a very early tracing mechanism normally used only at the request of IBM Service (this is also documented in the jms.config file).
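For reference, the relevant jms.config entries look roughly like this (a sketch based on the defaults shipped with the client; check the exact property names and values in your own jms.config):
# Send the JMS client log to a file rather than stdout/stderr
com.ibm.msg.client.commonservices.log.outputName=mqjms.log
# Startup trace writes very early diagnostics straight to the console; normally left off
com.ibm.msg.client.commonservices.trace.startup=false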

Packetbeat appears to be adding DNS packets that were not really sent

I have an interesting problem with Packetbeat. Packetbeat is installed on a Debian 10 system. It is the latest version of Packetbeat (installed fresh this week from the Elastic download area) and it is sending data to Elastic v7.7, also installed on a Debian 10 system.
I can see the DNS data in the Elastic logs (when viewing them using the Kibana --> Logs GUI). But I also see additional DNS packets in the log that I am not seeing in a packet analyzer (running tcpdump) on the same system that Packetbeat is running on.
Here is the packet analyzer showing DNS calls to/from a client (10.5.52.47). The Wireshark capture filter is set to 'port 53' and the display filter is set to 'ip.addr==10.5.52.47'. It is running on the same system as Packetbeat (for the purposes of troubleshooting this issue).
Wireshark screenshot
1552 2020-06-04 20:31:34.297973 10.5.52.47 10.1.3.200 52874 53 DNS 93 Standard query 0x95f7 SRV
1553 2020-06-04 20:31:34.298242 10.1.3.200 10.5.52.47 53 52874 DNS 165 Standard query response 0x95f7 No such name SRV
1862 2020-06-04 20:32:53.002439 10.5.52.47 10.1.3.200 59308 53 DNS 90 Standard query 0xd67f SRV
1863 2020-06-04 20:32:53.002626 10.1.3.200 10.5.52.47 53 59308 DNS 162 Standard query response 0xd67f No such name SRV
1864 2020-06-04 20:32:53.004126 10.1.3.200 10.5.52.47 64594 53 DNS 84 Standard query 0xaaaa A
1867 2020-06-04 20:32:53.516716 10.1.3.200 10.5.52.47 64594 53 DNS 84 Standard query 0xaaaa A
2731 2020-06-04 20:36:34.314959 10.5.52.47 10.1.3.200 53912 53 DNS 93 Standard query 0x2631 SRV
2732 2020-06-04 20:36:34.315058 10.1.3.200 10.5.52.47 53 53912 DNS 165 Standard query response 0x2631 No such name SRV
I removed the actual DNS query info from these packets as it is not pertinent to this topic.
From the Wireshark output, you can see a DNS query at 20:32:53 from 10.5.52.47 to the DNS server 10.1.3.200. The server responds to this query in the next packet. There are also two other packets from the server after this, within the same second.
The next DNS query by the client 10.5.52.47 occurs at 20:36:34, and this also gets an immediate response from the server.
This differs from the Kibana --> Logs output sent by Packetbeat. In the Kibana logs, it shows the following:
Screenshot of Kibana Log showing actual DNS call(s), and multiple non-existent DNS calls (highlighted in yellow)
All of the above info is present, as captured in the packet capture, plus the following:
20:33:00.000 destination IP of 10.5.52.47 destination port of 53
Same thing at
20:33:10.000
20:33:20.000
20:33:30.000
20:33:40.000
Then at 20:36:34 it shows the DNS query that the packet capture shows.
So, these port 53 entries that land on 00/10/20/30/40 seconds after the minute appear to be made up out of thin air. Additionally, there are no other fields being populated in the Elastic logs for these entries: client.ip is empty, and so are client.bytes, client.port, and ALL the DNS fields for these log entries. All the DNS entries that are listed in both the packet capture and Kibana have all the expected fields populated with correct data.
Does anyone have an idea of why this is occurring? The example above is a small sample. This occurs for multiple systems at 10-second intervals. For example, at 10, 20, 30, 40, 50, or 60 seconds after the minute, I see between 10 and 100 (a guesstimate) of these log entries where all the fields are blank except destination.ip, destination.bytes, and destination.port; there is no client info and no DNS info contained in the fields for these errant records.
The 'normal' DNS records have about 20 fields of information listed in the Kibana log, and these errant ones have only four fields (the fields listed above and the timestamp).
Here is an example of the log from one of these 10-second intervals...
timestamp Dest.ip Dest.bytes Dest.port
20:02:50.000 10.1.3.200 105 53
20:02:50.000 10.1.3.200 326 53
20:02:50.000 10.1.3.200 199 53
20:02:50.000 10.1.3.200 208 53
20:02:50.000 10.1.3.201 260 53
20:02:50.000 10.1.3.200 219 53
20:02:50.000 10.1.3.200 208 53
20:02:50.000 10.1.3.200 199 53
.
.
Plus 42 more of these at the same second
.
.
20:02:50.000 10.1.3.201 98 53
Kibana Log view of reported issue - the 'real' dns call is highlighted in yellow, the non-existent dns calls are marked by the red line - there are way more non-existent DNS calls logged than real DNS queries
And here is the packetbeat.yml file (only showing uncommented lines)
packetbeat.interfaces.device: enp0s25
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols:
- type: dhcpv4
  ports: [67, 68]
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true
setup.template.settings:
  index.number_of_shards: 1
setup.dashboards.enabled: true
setup.kibana:
  host: "1.1.1.1:5601"
output.elasticsearch:
  hosts: ["1.1.1.2:59200"]
setup.template.overwrite: true
setup.template.enabled: true
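(For completeness: the flows section above can be disabled outright while testing, to rule flow events in or out as the source of the sparse port 53 documents. The minimal change, using the standard packetbeat.flows option, is sketched below.)
packetbeat.flows:
  enabled: false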
Thank you for your thoughts on what might be causing this issue.
=======================================================================
Update on 6/8/20
I had to shut down Packetbeat due to this issue until I can find a resolution. One single Packetbeat system generated 100 million documents over the weekend for DNS queries alone, of which 98% were somehow created by Packetbeat and were not real DNS queries.
I stopped the packetbeat service this morning on the Linux box that is capturing these DNS queries, and deleted this index. I then restarted the Packetbeat instance, let it run for about 60 seconds, and stopped the packetbeat service again. During those 60 seconds, 22,119 DNS documents were added to the index. When I removed the documents Packetbeat created (that were not real DNS queries), it deleted 21,391, leaving me with 728 actual DNS queries. In this case, 97% of the documents were created by Packetbeat, and 3% were 'real' DNS queries made by our systems which Packetbeat captured.
Any ideas as to why this behavior is being exhibited by this system?
Thank you

How and why MUST the last 8 bytes be overwritten in TLS 1.3, as described below, if negotiating TLS 1.2 or TLS 1.1?

In RFC 8446, Section 4.1.3 (Server Hello), about the Random in the ServerHello sent by the server:
32 bytes generated by a secure random number generator. See Appendix C for additional information. The last 8 bytes MUST be overwritten as described below if negotiating TLS 1.2 or TLS 1.1, but the remaining bytes MUST be random. This structure is generated by the server and MUST be generated independently of the ClientHello.random.
Why and how must the last 8 bytes be overwritten as described below if negotiating TLS 1.2 or TLS 1.1?
RFC 8446, Section 4.1.3 (Server Hello), continues:
TLS 1.3 has a downgrade protection mechanism embedded in the server's
random value. TLS 1.3 servers which negotiate TLS 1.2 or below in
response to a ClientHello MUST set the last 8 bytes of their Random
value specially in their ServerHello.
If negotiating TLS 1.2, TLS 1.3 servers MUST set the last 8 bytes of
their Random value to the bytes:
44 4F 57 4E 47 52 44 01
If negotiating TLS 1.1 or below, TLS 1.3 servers MUST, and TLS 1.2
servers SHOULD, set the last 8 bytes of their ServerHello.Random
value to the bytes:
44 4F 57 4E 47 52 44 00
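To make the quoted mechanism concrete, here is a minimal sketch (not taken from any particular TLS implementation) of how a TLS 1.3 server that has agreed to fall back to TLS 1.2 would build its ServerHello.random: generate all 32 bytes randomly, then overwrite the last 8 with the fixed sentinel.
import java.security.SecureRandom;

public class DowngradeSentinel {
    // Fixed sentinel from RFC 8446: "DOWNGRD" followed by 0x01 for a TLS 1.2 downgrade
    private static final byte[] DOWNGRADE_TLS12 =
            { 0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01 };

    public static byte[] serverRandomForTls12Fallback() {
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);                 // all 32 bytes random first
        System.arraycopy(DOWNGRADE_TLS12, 0, random, 24, 8);  // overwrite the last 8 bytes
        return random;
    }
}
As for the why: per RFC 8446, a TLS 1.3-capable client that receives a ServerHello negotiating TLS 1.2 or below checks for these sentinel values and aborts the handshake if it finds one, so a man-in-the-middle cannot silently strip TLS 1.3 support from the negotiation.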

Wakanda connected to 4D: session error

When I launch a 4D method from Wakanda, I get this error:
{
  "__ERROR": [
    {
      "message": "The maximum number of sessions has been reached",
      "componentSignature": "dbmg",
      "errCode": 1823
    }
  ],
}
I can see the 4D databases in Wakanda.
I use 4D REST, and I have a method named Test_WebService() on the 4D side. In Wakanda, I call the method with ds.FA_UNITES.Test_WebService();
FA_UNITES is my table name.
The Test_WebService() method on the 4D side is $0:="Hello".
Can anyone help me?
The "The maximum number of sessions has been reached" error indicates that you do not have the appropriate licenses to use 4D Mobile.
I believe 4D Mobile Expansion is required to develop 4D Mobile applications; it grants you two 4D Mobile client sessions. If you do have 4D Mobile Expansion, please check whether it is activated properly.

BIRT report for lists of lists

I'm a bit of a BIRT newbie, and I could really do with a hand.
Some background:
I'm developing some software which allows display and simple modelling of Layer 1 connectivity in a data centre.
This is Java based, running on Tomcat, using BIRT reports. BIRT fetches the data over SOAP from a general web service we've implemented, which serves the data up as XML.
The report I'm working on currently queries our system to find out the circuit trace on a particular port on a piece of equipment.
The simple report for this works fine. It gives an ancestry path to the assets, and then the specific asset and port.
For example, asset ID 49345, port 1 would result in a report that looks (a bit) like this...
Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Telephone Patch Panel P1 - B1
Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Patch Panel 02 P2 - B2
Organization >> Client/Server development >> U1 ClntSvr FB 02 >> U1 ClntSvr FB 02 I4 P2 - B2
This says that the back of the telephone patch panel goes to the front, via a patch cord to another panel, to the back of that panel, to the back of a floor box.
This report works quite happily.
One customer wants more!
They want the Excel export from the BIRT report to be filterable, i.e. rather than having a delimited ancestry path, they need it in list form, so that when it's exported to Excel, each entry is in a different column.
I've modified my query to return an array of ancestry elements rather than a single string, and in its own way, this also works.
The SOAP response for this new query is below (for information - it may help)
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
<SOAP-ENV:Header/>
<SOAP-ENV:Body>
<FindCircuitByAssetAndPortResponse>
<CircuitDetail>
<FoundByAsset>Comms Room C02 R01 Telephone Patch Panel (id: 49345)</FoundByAsset>
<FoundByPort>P1</FoundByPort>
<CircuitAssetDetail>
<AssetId>49345</AssetId>
<AncestryPath>Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Telephone Patch Panel</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Comms Room</AncestryPathElement>
<AncestryPathElement>Comms Room Cabinet 02 Rack 01</AncestryPathElement>
<AncestryPathElement>Comms Room C02 R01 Telephone Patch Panel</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Patch Panel</AssetTypeName>
<InPort>B1</InPort>
<OutPort>P1</OutPort>
</CircuitAssetDetail>
<CircuitAssetDetail>
<AssetId>49339</AssetId>
<AncestryPath>Organization >> Comms Room >> Comms Room Cabinet 02 Rack 01 >> Comms Room C02 R01 Patch Panel 02</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Comms Room</AncestryPathElement>
<AncestryPathElement>Comms Room Cabinet 02 Rack 01</AncestryPathElement>
<AncestryPathElement>Comms Room C02 R01 Patch Panel 02</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Patch Panel</AssetTypeName>
<InPort>P2</InPort>
<OutPort>B2</OutPort>
</CircuitAssetDetail>
<CircuitAssetDetail>
<AssetId>48634</AssetId>
<AncestryPath>Organization >> Client/Server development >> U1 ClntSvr FB 02 >> U1 ClntSvr FB 02 I4</AncestryPath>
<AncestryPathList>
<AncestryPathElement>Organization</AncestryPathElement>
<AncestryPathElement>Client/Server development</AncestryPathElement>
<AncestryPathElement>U1 ClntSvr FB 02</AncestryPathElement>
<AncestryPathElement>U1 ClntSvr FB 02 I4</AncestryPathElement>
</AncestryPathList>
<AssetTypeName>Module</AssetTypeName>
<InPort>P2</InPort>
<OutPort>B2</OutPort>
</CircuitAssetDetail>
</CircuitDetail>
</FindCircuitByAssetAndPortResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
The report data set uses the deepest layer, i.e. the ancestry elements.
When it displays the data though, there is duplicated data. For example, the data above is now shown as...
Organization B1 - P1
Comms Room B1 - P1
Comms Room Cabinet 02 Rack 01 B1 - P1
Comms Room C02 R01 Telephone Patch Panel B1 - P1
Organization P2 - B2
Comms Room P2 - B2
Comms Room Cabinet 02 Rack 01 P2 - B2
Comms Room C02 R01 Patch Panel 02 P2 - B2
Organization P2 - B2
Client/Server development P2 - B2
U1 ClntSvr FB 02 P2 - B2
U1 ClntSvr FB 02 I4 P2 - B2
This is "correct" in that we're getting back 12 "rows" via XML. The column mapping says that the element is the "current" data, the ports (P1 & B1) are "up" one level, and so on.
If I fetch the data with respect to the ancestry path list, we don't get duplicated data, but at this point the ancestry path list isn't seen as a list, so it either displays nothing at all, or just the first element from the list repeatedly, resulting in...
Organization B1 - P1
Organization
Organization
Organization
Organization P2 - B2
Organization
Organization
Organization
Organization P2 - B2
Organization
Organization
Organization
I'm 99% sure BIRT will do what I need, but I'm a newcomer to it, and I'm surprised I've got as far as I have!
This problem is non-specific, as we have other situations where we may need to fetch lists of lists.
My apologies if this has already been covered. I have looked, but it may be listed under terminology I'm not familiar with.
Many thanks.
Pete.
If you want the output to look like this:
Organization B1 - P1
Comms Room
Comms Room Cabinet 02 Rack 01
Comms Room C02 R01 Telephone Patch Panel
Organization P2 - B2
Comms Room
Comms Room Cabinet 02 Rack 01
Comms Room C02 R01 Patch Panel 02
Organization
Client/Server development
U1 ClntSvr FB 02
U1 ClntSvr FB 02 I4
In BIRT there is an option called Group that does this. What you have to do is select the table to bring up its properties dialog; in there you will find the group options, which list the column names of your query, and you can select the column you want to group on.
View this tutorial for grouping.

Performance degradation using Azure CDN?

I have experimented quite a bit with the CDN from Azure, and I thought I was home safe after a successful setup using a web role.
Why the web role?
Well, I wanted the benefits of compression and caching headers, which I was unsuccessful in obtaining the normal blob way. And as an added bonus, the case-sensitivity constraint was eliminated as well.
Enough about the choice of CDN serving; while all content was previously served from the same domain, I now serve more or less all "static" content from cdn.cuemon.net. In theory, this should improve performance, since browsers can spread content fetching in parallel over "multiple" domains instead of one domain only.
Unfortunately this has led to a decrease in performance, which I believe has to do with the number of hops before content is served (using a tracert command):
C:\Windows\system32>tracert -d cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 21 ms 21 ms 87.59.99.217
3 30 ms 30 ms 31 ms 62.95.54.124
4 30 ms 29 ms 29 ms 194.68.128.181
5 30 ms 30 ms 30 ms 207.46.42.44
6 83 ms 61 ms 59 ms 207.46.42.7
7 65 ms 65 ms 64 ms 207.46.42.13
8 65 ms 67 ms 74 ms 213.199.152.186
9 65 ms 65 ms 64 ms 94.245.68.160
C:\Windows\system32>tracert cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 22 ms 20 ms ge-1-1-0-1104.hlgnqu1.dk.ip.tdc.net [87.59.99.217]
3 29 ms 30 ms 30 ms ae1.tg4-peer1.sto.se.ip.tdc.net [62.95.54.124]
4 30 ms 30 ms 29 ms netnod-ix-ge-b-sth-1500.microsoft.com [194.68.128.181]
5 45 ms 45 ms 46 ms ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.10]
6 87 ms 59 ms 59 ms xe-3-2-0-0.fra-96cbe-1a.ntwk.msn.net [207.46.42.50]
7 68 ms 65 ms 65 ms xe-0-1-0-0.zrh-96cbe-1b.ntwk.msn.net [207.46.42.13]
8 65 ms 70 ms 74 ms 10gigabitethernet5-1.zrh-xmx-edgcom-1b.ntwk.msn.net [213.199.152.186]
9 65 ms 65 ms 65 ms cds29.zrh9.msecn.net [94.245.68.160]
As you can see from the above trace route, all external content is delayed for quite some time.
It is worth noting that the Azure service is set up in North Europe and I am located in Denmark, which is why this trace route seems a bit .. hmm .. over the top.
Another issue might be that the web role consists of two extra-small instances; I have not yet found the time to try with two small instances, but I know that Microsoft limits extra-small instances to a 5 Mbps WAN connection, whereas small and above get 100 Mbps.
I am just unsure whether this applies to the CDN as well.
Anyway - any help and/or explanation is greatly appreciated.
And let me state, that I am very satisfied with the Azure platform - I am just curious in regards to the above mentioned matters.
Update
New tracert without the -d option.
Inspired by user728584, I have done some research and found this article, http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx, which I will investigate further with regard to public cache-control and the CDN.
This does not explain the excessive hop count phenomenon, but I hope a skilled network professional can help shed light on this matter.
Rest assured, that I will keep you posted according to my findings.
Not to state the obvious, but I assume you have set the Cache-Control HTTP header to a large value, so that your content is not being removed from the CDN cache and served from blob storage when you did your tracert tests?
There are quite a few edge servers near you so I would expect it to perform better: 'Windows Azure CDN Node Locations' http://msdn.microsoft.com/en-us/library/windowsazure/gg680302.aspx
Maarten Balliauw has a great article on usage and use cases for the CDN (this might help?): http://acloudyplace.com/2012/04/using-the-windows-azure-content-delivery-network/
Not sure if that helps at all, interesting...
Okay, after I'd implemented public cache-control headers, the CDN appears to do what is expected: delivering content from x number of nodes in the CDN cluster.
The above comes with the caveat that it is based on experience; it has not been measured for concrete validation.
However, this link supports my theory: http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_windowsazurecdn_topic3,
The time-to-live (TTL) setting for a blob controls for how long a CDN edge server returns a copy of the cached resource before requesting a fresh copy from its source in blob storage. Once this period expires, a new request will force the CDN server to retrieve the resource again from the original blob, at which point it will cache it again.
That was my assumed challenge: the CDN-referenced resources kept polling the original blob.
Credit must also be given to this link (provided by user728584): http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx.
And the final link for now: http://blogs.msdn.com/b/windowsazure/archive/2011/03/18/best-practices-for-the-windows-azure-content-delivery-network.aspx
For ASP.NET pages, the default behavior is to set cache control to private. In this case, the Windows Azure CDN will not cache this content. To override this behavior, use the Response object to change the default cache control settings.
So my conclusion so far for this little puzzle is that you must pay close attention to your cache-control headers (which are often set to private for obvious reasons). If you skip the web-role approach, the TTL defaults to 72 hours, which is why you may never experience what I experienced; it will just work out of the box.
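(For reference, the kind of response header that keeps the CDN serving content from its edge nodes has this general form; the max-age value here is purely illustrative, not the one used on this site.)
Cache-Control: public, max-age=604800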
Thanks to user728584 for pointing me in the right direction.
