Cannot read TcpPacket in pcap4j

I am trying to use pcap4j to get TcpPackets from the loopback interface. I can read packets successfully, but I cannot convert them to TcpPackets:
import org.pcap4j.core.BpfProgram;
import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.packet.Packet;
import org.pcap4j.packet.TcpPacket;
import org.pcap4j.util.NifSelector;

public class Main {
    public static void main(String[] args) throws Exception {
        try {
            PcapNetworkInterface nic = new NifSelector().selectNetworkInterface();
            System.out.println(nic);
            PcapHandle handle = nic.openLive(65536, PcapNetworkInterface.PromiscuousMode.PROMISCUOUS, 10000);
            handle.setFilter("tcp port 8080", BpfProgram.BpfCompileMode.OPTIMIZE);
            while (handle.isOpen()) {
                Packet packet = handle.getNextPacketEx();
                if (packet != null) {
                    TcpPacket tcpPacket = packet.get(TcpPacket.class);
                    System.out.println(tcpPacket);
                    // EthernetPacket ethernetPacket = EthernetPacket.newPacket(packet.getRawData(), 0, packet.length());
                    //
                    // Dot1qVlanTagPacket dot1qVlanTagPacket = Dot1qVlanTagPacket.newPacket(ethernetPacket.getRawData(), 0, ethernetPacket.length());
                    //
                    // IpV4Packet ipV4Packet = IpV4Packet.newPacket(dot1qVlanTagPacket.getRawData(), 0, dot1qVlanTagPacket.length());
                    //
                    // TcpPacket tcpPacket1 = TcpPacket.newPacket(ipV4Packet.getRawData(), 0, ipV4Packet.length());
                    // System.out.println(tcpPacket1);
                }
            }
        } catch (Exception e) {
            System.out.println(e.getClass() + " " + e.getMessage());
        }
    }
}
Things to note:
I am using getNextPacketEx so I can benefit from the packet factory.
My dependencies in pom.xml look like this:
<dependencies>
    <dependency>
        <groupId>org.pcap4j</groupId>
        <artifactId>pcap4j-core</artifactId>
        <version>1.8.2</version>
    </dependency>
    <dependency>
        <groupId>org.pcap4j</groupId>
        <artifactId>pcap4j-packetfactory-propertiesbased</artifactId>
        <version>1.8.2</version>
    </dependency>
</dependencies>
I have also tried to unwrap the packets manually (see the commented code). That works until I get an IpV4Packet which contains fragmented data.
When I try to construct the TCP packet via TcpPacket.newPacket..., I receive the error:
class org.pcap4j.packet.IllegalRawDataException The data offset must be equal or more than 5, but it is: 0
What I would like to do is read the TCP packets, reassemble them, and then read the HTTP content I want to sniff. Can anyone help?
UPDATE:
This is a full dump of the IPv4 packets for a simple HTTP GET on localhost, port 8080:
name: [\Device\NPF_Loopback] description: [Adapter for loopback traffic capture] loopBack: [true]] up: [true]] running: [true]] local: [true]
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 32 (MERIT-INP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (44 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 94 00 00 00 00 80 02 ff ff 97 b8 00 00 02 04 ff c3 01 03 03 08 01 01 04 02
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 32 (MERIT-INP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (44 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 79 d7 16 42 e3 95 80 12 ff ff 01 1f 00 00 02 04 ff c3 01 03 03 08 01 01 04 02
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 95 1c b1 79 d8 50 10 27 f6 14 0c 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 106 (QNX)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (118 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 95 1c b1 79 d8 50 18 27 f6 c3 af 00 00 47 45 54 20 2f 67 72 65 65 74 69 6e 67 20 48 54 54 50 2f 31 2e 31 0d 0a 48 6f 73 74 3a 20 6c 6f 63 61 6c 68 6f 73 74 3a 38 30 38 30 0d 0a 55 73 65 72 2d 41 67 65 6e 74 3a 20 63 75 72 6c 2f 37 2e 35 35 2e 31 0d 0a 41 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a 0d 0a
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 79 d8 16 42 e3 eb 50 10 27 f6 13 b6 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 175 (unknown)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (187 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 79 d8 16 42 e3 eb 50 18 27 f6 5e cc 00 00 48 54 54 50 2f 31 2e 31 20 32 30 30 20 0d 0a 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3a 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f 6a 73 6f 6e 0d 0a 54 72 61 6e 73 66 65 72 2d 45 6e 63 6f 64 69 6e 67 3a 20 63 68 75 6e 6b 65 64 0d 0a 44 61 74 65 3a 20 57 65 64 2c 20 32 38 20 4f 63 74 20 32 30 32 30 20 31 30 3a 31 32 3a 35 33 20 47 4d 54 0d 0a 0d 0a 32 33 0d 0a 7b 22 69 64 22 3a 36 36 2c 22 63 6f 6e 74 65 6e 74 22 3a 22 48 65 6c 6c 6f 2c 20 57 6f 72 6c 64 21 22 7d 0d 0a
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 eb 1c b1 7a 73 50 10 27 f5 13 1c 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 25 (Leaf-1)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (37 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 7a 73 16 42 e3 eb 50 18 27 f6 ce f3 00 00 30 0d 0a 0d 0a
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 eb 1c b1 7a 78 50 10 27 f5 13 17 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 eb 1c b1 7a 78 50 11 27 f5 13 16 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 7a 78 16 42 e3 ec 50 10 27 f6 13 15 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24590
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, false)
Fragment offset: 4790 (38320 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 1f 90 c3 df 1c b1 7a 78 16 42 e3 ec 50 11 27 f6 13 14 00 00
[IPv4 Header (32 bytes)]
Version: 1 (unknown)
IHL: 8 (32 [bytes])
TOS: [precedence: 0 (Routine)] [tos: 0 (Default)] [mbz: 0]
Total length: 0 [bytes]
Identification: 24591
Flags: (Reserved, Don't Fragment, More Fragment) = (true, true, true)
Fragment offset: 5741 (45928 [bytes])
TTL: 0
Protocol: 20 (HMP)
Header checksum: 0x0680
Source address: /0.0.0.0
Destination address: /0.0.0.0
Option: [option-type: 0 (End of Option List)]
Padding: 0x00 00 00 00 00 00 01 00 00 00 00
[Fragmented data (32 bytes)]
Hex stream: 00 00 00 00 00 00 00 00 00 00 00 01 c3 df 1f 90 16 42 e3 ec 1c b1 7a 79 50 10 27 f5 13 15 00 00

The pcap4j repository contains an example of how to deal with fragmented packets:
https://github.com/kaitoy/pcap4j/blob/v1/pcap4j-sample/src/main/java/org/pcap4j/sample/DefragmentEcho.java
Basically, you need to group the IpV4Packets by the identification field in their headers:
Packet packet = handle.getNextPacketEx();
Short id = packet.get(IpV4Packet.class).getHeader().getIdentification();
and once you have all the fragments, you can reassemble the whole payload:
//list of fragments with the same id
List<IpV4Packet> list;
final IpV4Packet defragmentedIpV4Packet = IpV4Helper.defragment(list);
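For reference, here is a minimal sketch of that group-and-defragment loop, loosely modelled on the DefragmentEcho sample. The class name FragmentCollector and the simplifying assumption that the last fragment arrives last are mine, and getMoreFragmentFlag() is the IpV4Header accessor I believe exposes the "more fragments" bit, so treat this as a starting point rather than production code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.pcap4j.core.PcapHandle;
import org.pcap4j.packet.IpV4Packet;
import org.pcap4j.packet.Packet;
import org.pcap4j.packet.TcpPacket;
import org.pcap4j.util.IpV4Helper;

public class FragmentCollector {
    // Fragments of the same datagram share the Identification value.
    private final Map<Short, List<IpV4Packet>> fragments = new HashMap<>();

    public void collect(PcapHandle handle) throws Exception {
        while (handle.isOpen()) {
            Packet packet = handle.getNextPacketEx();
            IpV4Packet ipV4Packet = packet.get(IpV4Packet.class);
            if (ipV4Packet == null) {
                continue;
            }
            Short id = ipV4Packet.getHeader().getIdentification();
            fragments.computeIfAbsent(id, k -> new ArrayList<>()).add(ipV4Packet);

            // Simplification: treat the fragment with "more fragments" == false as the
            // last one to arrive; real captures may deliver fragments out of order.
            if (!ipV4Packet.getHeader().getMoreFragmentFlag()) {
                IpV4Packet whole = IpV4Helper.defragment(fragments.remove(id));
                // Depending on how the reassembled payload was built, the TCP layer may be
                // available directly, or may need re-parsing from whole's payload raw data.
                TcpPacket tcp = whole.get(TcpPacket.class);
                System.out.println(tcp != null ? tcp : whole);
            }
        }
    }
}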
Edit (in response to the added pcap dump file)
I've noticed (in the output.pcap file that you linked in the comment below) that you are using IPv6. I've tried using that file as a source of the packets, i.e.
PcapHandle handle = Pcaps.openOffline("output.pcap");
and those packets were not fully parsed. The reason was that the protocol family was not recognized.
The correct value of the IPv6 protocol family is platform specific; unless you specify a custom value, pcap4j uses Pcap4jPropertiesLoader to obtain a default value for the respective platform.
Since the value 24 used in your captured packets is not listed among the defaults, you need a custom configuration.
One way to do that is via system properties:
//associate value 24 with the IPv6 protocol family
System.setProperty("org.pcap4j.af.inet6", "24");
//set the class which should parse the packets for that family
System.setProperty("org.pcap4j.packet.Packet.classFor.org.pcap4j.packet.namednumber.ProtocolFamily.24", "org.pcap4j.packet.IpV6Packet");
After that, parsing via:
Packet packet = handle.getNextPacketEx();
TcpPacket tcpPacket = packet.get(TcpPacket.class);
should work normally.
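Putting the whole edit together, a minimal sketch that sets the two properties before the handle is opened and then reads from the capture file (the file name output.pcap comes from the comment; adjust it to your setup). getNextPacketEx() throws an EOFException at the end of an offline capture, so the sketch catches it:
import java.io.EOFException;

import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.Pcaps;
import org.pcap4j.packet.Packet;
import org.pcap4j.packet.TcpPacket;

public class OfflineTcpReader {
    public static void main(String[] args) throws Exception {
        // Map the platform-specific address-family value 24 to IPv6 and tell the
        // properties-based packet factory which class should parse that family.
        System.setProperty("org.pcap4j.af.inet6", "24");
        System.setProperty(
            "org.pcap4j.packet.Packet.classFor.org.pcap4j.packet.namednumber.ProtocolFamily.24",
            "org.pcap4j.packet.IpV6Packet");

        PcapHandle handle = Pcaps.openOffline("output.pcap");
        try {
            while (handle.isOpen()) {
                try {
                    Packet packet = handle.getNextPacketEx();
                    TcpPacket tcpPacket = packet.get(TcpPacket.class);
                    System.out.println(tcpPacket);
                } catch (EOFException e) {
                    break; // end of the capture file
                }
            }
        } finally {
            handle.close();
        }
    }
}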

Related

Parsing an ASCII text file using GAWK

I have been trying to parse an ASCII text file of the following format --
0 0 0x2de0 [0x98]: PERF_RECORD_MMAP -1/0: [0xffffffffc06ae000(0x5000) # 0]: x /lib/modules/4.4.0-83-generic/kernel/net/ipv4/netfilter/nf_reject_ipv4.ko
0x2e78 [0x90]: event: 1
.
. ... raw event: size 144 bytes
. 0000: 01 00 00 00 01 00 90 00 ff ff ff ff 00 00 00 00 ................
. 0010: 00 30 6b c0 ff ff ff ff 00 50 00 00 00 00 00 00 .0k......P......
. 0020: 00 00 00 00 00 00 00 00 2f 6c 69 62 2f 6d 6f 64 ......../lib/mod
. 0030: 75 6c 65 73 2f 34 2e 34 2e 30 2d 38 33 2d 67 65 ules/4.4.0-83-ge
. 0040: 6e 65 72 69 63 2f 6b 65 72 6e 65 6c 2f 6e 65 74 neric/kernel/net
. 0050: 2f 69 70 76 34 2f 6e 65 74 66 69 6c 74 65 72 2f /ipv4/netfilter/
. 0060: 69 70 74 5f 52 45 4a 45 43 54 2e 6b 6f 00 2e 6b ipt_REJECT.ko..k
. 0070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
. 0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0 0 0x2e78 [0x90]: PERF_RECORD_MMAP -1/0: [0xffffffffc06b3000(0x5000) # 0]: x /lib/modules/4.4.0-83-generic/kernel/net/ipv4/netfilter/ipt_REJECT.ko
0x2f08 [0x88]: event: 1
.
. ... raw event: size 136 bytes
. 0000: 01 00 00 00 01 00 88 00 ff ff ff ff 00 00 00 00 ................
. 0010: 00 80 6b c0 ff ff ff ff 00 50 00 00 00 00 00 00 ..k......P......
. 0020: 00 00 00 00 00 00 00 00 2f 6c 69 62 2f 6d 6f 64 ......../lib/mod
. 0030: 75 6c 65 73 2f 34 2e 34 2e 30 2d 38 33 2d 67 65 ules/4.4.0-83-ge
. 0040: 6e 65 72 69 63 2f 6b 65 72 6e 65 6c 2f 6e 65 74 neric/kernel/net
. 0050: 2f 6e 65 74 66 69 6c 74 65 72 2f 78 74 5f 74 63 /netfilter/xt_tc
. 0060: 70 75 64 70 2e 6b 6f 00 00 00 00 00 00 00 00 00 pudp.ko.........
. 0070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
. 0080: 00 00 00 00 00 00 00 00
........[some other data]........
0x11590 [0x30]: PERF_RECORD_AUXTRACE size: 0x2002a0 offset: 0 ref: 0x2d44e6441a3c2 idx: 0 tid: -1 cpu: 0
.
. ... Intel Processor Trace data: size 2097824 bytes
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
--- continued ---
The file will have several headers, as you can see in my snippet:
PERF_RECORD_MMAP and PERF_RECORD_AUXTRACE
There will be other headers in the file as well.
What I want is that only the PERF_RECORD_AUXTRACE headers in my text file should be considered, and only the data following PERF_RECORD_AUXTRACE should be collected (i.e. all of the data starting with "Intel Processor Trace data"). The PERF_RECORD_AUXTRACE header also has a size field, with which I can determine how much data belongs to that record.
Edit #1 :
So basically, given the above input file snippet, I want the output to be of the following form (all the lines after the record containing PERF_RECORD_AUXTRACE):
.
. ... Intel Processor Trace data: size 2097824 bytes
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
--- continued ---
EDIT #2 : This is another requirement that I have.
If I have an input snippet like the one below --
0 0 0x230 [0x60]: PERF_RECORD_MMAP -1/0: [0xffffffff81000000(0x3f000000) # 0xffffffff81000000]: x [kernel.kallsyms]_text
0x290 [0x88]: event: 1
.
. ... raw event: size 136 bytes
. 0000: 01 00 00 00 01 00 88 00 ff ff ff ff 00 00 00 00 ................
. 0010: 00 00 00 c0 ff ff ff ff 00 90 00 00 00 00 00 00 ................
. 0020: 00 00 00 00 00 00 00 00 2f 6c 69 62 2f 6d 6f 64 ......../lib/mod
. 0030: 75 6c 65 73 2f 34 2e 34 2e 30 2d 38 33 2d 67 65 ules/4.4.0-83-ge
. 0040: 6e 65 72 69 63 2f 6b 65 72 6e 65 6c 2f 64 72 69 neric/kernel/dri
. 0050: 76 65 72 73 2f 61 74 61 2f 6c 69 62 61 68 63 69 vers/ata/libahci
. 0060: 2e 6b 6f 00 00 00 00 00 00 00 00 00 00 00 00 00 .ko.............
. 0070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
. 0080: 00 00 00 00 00 00 00 00 ........
0x11590 [0x30]: PERF_RECORD_AUXTRACE size: 0x2002a0 offset: 0 ref: 0x2d44e6441a3c2 idx: 0 tid: -1 cpu: 0
.
. ... Intel Processor Trace data: size 2097824 bytes
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
. 00000036: 02 c8 c2 3a 7c 00 00 00 VMCS 0x7c3ac2
0 0 0x290 [0x88]: PERF_RECORD_MMAP -1/0: [0xffffffffc0000000(0x9000) # 0]: x /lib/modules/4.4.0-83-generic/kernel/drivers/ata/libahci.ko
0x318 [0x98]: event: 1
.
. ... raw event: size 152 bytes
. 0000: 01 00 00 00 01 00 98 00 ff ff ff ff 00 00 00 00 ................
. 0010: 00 90 00 c0 ff ff ff ff 00 50 00 00 00 00 00 00 .........P......
. 0020: 00 00 00 00 00 00 00 00 2f 6c 69 62 2f 6d 6f 64 ......../lib/mod
. 0030: 75 6c 65 73 2f 34 2e 34 2e 30 2d 38 33 2d 67 65 ules/4.4.0-83-ge
. 0040: 6e 65 72 69 63 2f 6b 65 72 6e 65 6c 2f 64 72 69 neric/kernel/dri
. 0050: 76 65 72 73 2f 76 69 64 65 6f 2f 66 62 64 65 76 vers/video/fbdev
. 0060: 2f 63 6f 72 65 2f 66 62 5f 73 79 73 5f 66 6f 70 /core/fb_sys_fop
. 0070: 73 2e 6b 6f 00 00 00 00 00 00 00 00 00 00 00 00 s.ko............
. 0080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
. 0090: 00 00 00 00 00 00 00 00 ........
0x11590 [0x30]: PERF_RECORD_AUXTRACE size: 0x2002a0 offset: 0 ref: 0x2d44e6441a3c2 idx: 0 tid: -1 cpu: 0
.
. ... Intel Processor Trace data: size 2097824 bytes
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
. 00000036: 02 c8 c2 3a 7c 00 00 00 VMCS 0x7c3ac2
I would only need the data under the records containing PERF_RECORD_AUXTRACE, just like this. It would be great if the line that contains
Intel Processor Trace data: size 2097824 bytes
could also be omitted from my output.
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
. 00000000: 02 82 02 82 02 82 02 82 02 82 02 82 02 82 02 82 PSB
. 00000010: 00 00 00 PAD
. 00000013: 99 20 MODE.TSX TXAbort:0 InTX:0
. 00000015: 99 01 MODE.Exec 64
. 00000017: 7d 08 45 06 81 ff ff 00 FUP 0xffff81064508
. 0000001f: 00 00 00 00 00 00 00 PAD
. 00000026: 02 43 00 76 49 1f 00 00 PIP 0xfa4bb00 (NR=0)
. 0000002e: 00 00 00 00 00 00 00 00 PAD
Edit #3 : This is what I initially tried to do, but it obviously does not work!
cat "$file" | gawk -F' ' -- '
/PERF_RECORD_AUXTRACE / {
offset = strtonum($1)
hsize = strtonum(substr($2, 2))
size = strtonum($5)
idx = strtonum($11)
ext = ""
ofile = sprintf("raw-pt.txt")
begin = offset + hsize
cmd = sprintf("dd if=%s of=%s conv=notrunc oflag=append ibs=1 " \
"count=%d status=none", file, ofile, size)
#!cmd = sprintf("sed p")
if (dry_run != 0) {
print cmd
}
else {
system(cmd)
}
}
I am not quite sure how I can properly parse this file to get exactly what I want. I am also not sure whether using Python would help.
How can I resolve this?
To get the output you say you want from the input you posted, all you need is:
awk 'f; /PERF_RECORD_AUXTRACE/{f=1}' file
If that's not actually all you want then edit your question to clarify your requirements and provide different sample input/output that more truly demonstrates your problem if necessary.
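For completeness, the same "print everything after the matching line" logic as a small hedged Java sketch (AuxtraceFilter is just an illustrative name; pass the file path as the first argument):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class AuxtraceFilter {
    public static void main(String[] args) throws IOException {
        boolean[] printing = {false}; // flips to true once PERF_RECORD_AUXTRACE is seen
        try (Stream<String> lines = Files.lines(Paths.get(args[0]))) {
            lines.forEach(line -> {
                if (printing[0]) {
                    System.out.println(line); // print only lines after the matching record
                }
                if (line.contains("PERF_RECORD_AUXTRACE")) {
                    printing[0] = true;
                }
            });
        }
    }
}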

How to extract bitmap images from .slide file generated by a CytoVision Platform

I am working with a neural network to classify images.
I have some files generated by a CytoVision Platform. I would like to use the images in those files but I need to extract them somehow.
These .slide files contain several images, apparently 16 KB each.
I have developed a C program, which I am currently running on Linux, to extract each 16 KB block into its own file. I would need to build a header in order to use those images.
I don't know which format they have.
If I look at the entire file as a bitmap with FileAlyzer I can see this:
File as a bitmap
This link should allow anyone to download an example file:
https://ufile.io/2ibdq
This is what seems to be one image header:
42 4D 31 00 00 00 00 00 40 8F 40 05 00 9E 5F 98 D7 47 60 A1 40 01 04 4D 65 74 31 00 00 00 00 00 40 8F 40 05 00 64 31 2E 29 B5 46 DC 40 01 04 4D 65 74 32 00 00 00 00 00 40 8F 40 05 00 87 7D 26 70 88 C0 C5 40 01 04 4D 65 74 33 00 00 00 00 00 40 8F 40 05 00 C8 97 53 05 BB 0D 0F 41 01 04 54 65 78 31 00 00 00 00 00 00 D0 40 05 00 00 00 00 00 00 40 5C 40 07 04 54 65 78 32 00 00 00 00 00 00 D0 40 05 00 00 00 00 00 00 00 44 40 07 04 54 65 78 33 00 00 00 00 00 00 D0 40 05 00 00 00 00 00 00 90 76 40 07 04 54 65 78 34 00 00 00 00 00 00 D0 40 05 00 00 00 00 00 00 F4 CD 40 07 0A 43 68 72 6F 6D 73 41 72 65 61 00 00 00 00 00 4C BD 40 05 00 F3 76 84 D3 82 85 74 40 07 08 42 6F 75 6E 64 61 72 79 00 00 00 00 00 88 B3 40 05 00 D9 CE F7 53 E3 AD 7E 40 07 04 41 72 65 61 00 00 00 00 00 88 B3 40 05 00 20 EF 55 2B 13 0B 85 40 07 07 4F 62 6A 65 63 74 73 00 00 00 00 00 00 69 40 05 00 00 00 00 00 00 00 18 40 03 04 43 69 72 63 00 00 00 00 00 40 8F 40 05 00 9D E5 51 0E 5C 34 65 40 03 03 42 47 52 00 00 00 00 00 40 8F 40 05 00 7D 0C CE C7 E0 AC 86 40 03 04 54 65 78 35 00 00 00 00 00 00 D0 40 05 00 00 00 00 00 00 00 53 40 07 04 41 52 41 54 00 00 00 00 00 40 8F 40 05 00 86 89 F7 23 A7 79 7E 40 07 05 43 6C 61 73 73 00 00 00 00 00 00 F0 3F 05 00 00 00 00 00 00 00 F0 BF 00 01 00 00 00 01 00 00 00
With notepad++ I can see the previous hex like this:
BM1 #? ??G`?Met1 #? d1.)??Met2 #? ?&p?bMet3 #? ?S?ATex1 ? #\#Tex2 ? D#Tex3 ? ?#Tex4 ? ??
ChromsArea L? ???t#Boundary ?# ???~#Area ?# ?bObjects i# #Circ #? ?Q\4e#BGR #? }??#Tex5 ? S#ARAT #? Ð??~#Class ?? ?? #
Hope someone can give me an idea about the format of the images and what info I can extract from the header.

Conflicting answers when I try to find out endian-ness of my Macbook Pro

I have a Macbook Pro and am getting contradictory answers when I try to determine its endian-ness.
Method 1
python -c "import sys;print sys.byteorder" tells me I am on a little endian system
Method 2
I have a text file. I used iconv to convert it into UTF-16. It's supposed to detect the endianness of the computer and convert the file into that format. So here I go:
iconv -f utf-8 -t utf-16 file.txt > utf16.txt
file utf16.txt
utf16.txt: Big-endian UTF-16 Unicode English text
vi utf16.txt works and hexdump -C utf16.txt shows:
00000000 fe ff 00 33 00 39 00 38 00 31 00 36 00 30 00 38 |...3.9.8.1.6.0.8|
00000010 00 09 00 54 00 69 00 61 00 20 00 4a 00 75 00 61 |...T.i.a. .J.u.a|
00000020 00 6e 00 61 00 20 00 52 00 69 00 76 00 65 00 72 |.n.a. .R.i.v.e.r|
00000030 00 09 00 54 00 69 00 61 00 20 00 4a 00 75 00 61 |...T.i.a. .J.u.a|
00000040 00 6e 00 61 00 20 00 52 00 69 00 76 00 65 00 72 |.n.a. .R.i.v.e.r|
00000050 00 09 00 52 00 69 00 6f 00 20 00 54 00 69 00 61 |...R.i.o. .T.i.a|
00000060 00 6a 00 75 00 61 00 6e 00 61 00 2c 00 52 00 69 |.j.u.a.n.a.,.R.i|
00000070 00 6f 00 20 00 54 00 69 00 6a 00 75 00 61 00 6e |.o. .T.i.j.u.a.n|
00000080 00 61 00 2c 00 52 00 ed 00 6f 00 20 00 54 00 69 |.a.,.R...o. .T.i|
00000090 00 6a 00 75 00 61 00 6e 00 61 00 2c 00 54 00 69 |.j.u.a.n.a.,.T.i|
If I convert it to little-endian and manually insert a BOM like this:
( printf "\xff\xfe" ; iconv -f utf-8 -t utf-16le file.txt ) > UTF16LEBOM.txt
file UTF16LEBOM.txt
UTF16LEBOM.txt: Little-endian UTF-16 Unicode English text
vi UTF16LEBOM.txt works, and hexdump -C UTF16LEBOM.txt shows:
00000000 ff fe 33 00 39 00 38 00 31 00 36 00 30 00 38 00 |..3.9.8.1.6.0.8.|
00000010 09 00 54 00 69 00 61 00 20 00 4a 00 75 00 61 00 |..T.i.a. .J.u.a.|
00000020 6e 00 61 00 20 00 52 00 69 00 76 00 65 00 72 00 |n.a. .R.i.v.e.r.|
00000030 09 00 54 00 69 00 61 00 20 00 4a 00 75 00 61 00 |..T.i.a. .J.u.a.|
00000040 6e 00 61 00 20 00 52 00 69 00 76 00 65 00 72 00 |n.a. .R.i.v.e.r.|
00000050 09 00 52 00 69 00 6f 00 20 00 54 00 69 00 61 00 |..R.i.o. .T.i.a.|
00000060 6a 00 75 00 61 00 6e 00 61 00 2c 00 52 00 69 00 |j.u.a.n.a.,.R.i.|
00000070 6f 00 20 00 54 00 69 00 6a 00 75 00 61 00 6e 00 |o. .T.i.j.u.a.n.|
00000080 61 00 2c 00 52 00 ed 00 6f 00 20 00 54 00 69 00 |a.,.R...o. .T.i.|
00000090 6a 00 75 00 61 00 6e 00 61 00 2c 00 54 00 69 00 |j.u.a.n.a.,.T.i.|
From this link:
The other approach is to include a magic number, such as 0xFEFF,
before every piece of data. If you read the magic number and it is
0xFEFF, it means the data is in the same format as your machine, and
all is well.
If you read the magic number and it is 0xFFFE (it is backwards), it
means the data was written in a format different from your own. You'll
have to translate it.
Who is right and why am I getting contradictory answers?
"Endian-ness of my Macbook Pro" means nothing. You need to be more specific; different applications will have different impressions. As you've just seen, you can arbitrarily encode bytes in a file. In the end, a series of bytes is just that, and files are ultimately just a series of bytes that can be read in either fashion. In the context of programming (Stack Overflow) what's important is knowing a) Whether the input you are getting is in Big Endian or Little Endian, and b) Whether the output you send should be in Big Endian or Little Endian.
If your question is about the conventional reading of files, the answer is usually Little Endian. But, for example, network data tends to be Big Endian.
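If you want the machine's view from code rather than from file contents, here is a small Java sketch (using the standard java.nio APIs) that shows both the native byte order and how the same two bytes change meaning depending on the declared order:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    public static void main(String[] args) {
        // What the JVM reports for this machine (the equivalent of Method 1).
        System.out.println("native order: " + ByteOrder.nativeOrder());

        // The same two bytes interpreted as a 16-bit value under each order:
        byte[] bom = {(byte) 0xFE, (byte) 0xFF};
        ByteBuffer big = ByteBuffer.wrap(bom).order(ByteOrder.BIG_ENDIAN);
        ByteBuffer little = ByteBuffer.wrap(bom).order(ByteOrder.LITTLE_ENDIAN);
        System.out.printf("as big-endian:    %04x%n", big.getShort() & 0xFFFF);    // feff
        System.out.printf("as little-endian: %04x%n", little.getShort() & 0xFFFF); // fffe
    }
}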

What do these bytes do?

This is the hexdump of a black 1x1 PNG made in Gimp and exported with minimal information:
89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52
00 00 00 01 00 00 00 01 08 02 00 00 00 90 77 53
DE 00 00 00 0C 49 44 41 54 08 D7 63 60 60 60 00
00 00 04 00 01 27 34 27 0A 00 00 00 00 49 45 4E
44 AE 42 60 82
Now, after reading the specification, I am quite sure what most of them mean, except for bytes 30-34, between the IHDR and IDAT chunks: 90 77 53 DE
Can someone enlighten me?
Those bytes are the CRC checksum of the previous chunk. See the official specification: 5 Datastream structure for a general overview, and in particular 5.3 Chunk layout.
A CRC is calculated for, and appended to, each separate chunk:
A four-byte CRC (Cyclic Redundancy Code) calculated on the preceding bytes in the chunk, including the chunk type field and chunk data fields, but not including the length field. The CRC can be used to check for corruption of the data. The CRC is always present, even for chunks containing no data.
Here is your 1x1 pixel image, annotated byte for byte. Right after the data of each of the chunks IHDR, IDAT, and IEND is a CRC for the preceding data.
File: test.png
89 50 4E 47 0D 0A 1A 0A
Header 0x89 "PNG" CR LF ^Z LF checks out okay
===========
00 00 00 0D
49 48 44 52
00 00 00 01 00 00 00 01 08 02 00 00 00
90 77 53 DE
block: "IHDR", 13 bytes [49484452]
Width: 1
Height: 1
Bit depth: 8
Color type: 2 = Color
(Bits per pixel: 8)
(Bytes per pixel: 3)
Compression method: 0
Filter method: 0
Interlace method: 0 (none)
CRC: 907753DE
===========
00 00 00 0C
49 44 41 54
08 D7 63 60 60 60 00 00 00 04 00 01
27 34 27 0A
block: "IDAT", 12 bytes [49444154]
expanded result: 4 (as expected)
(Row 0 Filter:0)
decompresses into
00 00 00 00
CRC: 2734270A
===========
00 00 00 00
49 45 4E 44
AE 42 60 82
block: "IEND", 0 bytes [49454E44]
CRC: AE426082
The IDAT data decompresses into four 0's: the first one is the row filter (0, meaning 'none') and the next 3 bytes are Red, Green, Blue values for the one single pixel.
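You can verify the IHDR value yourself: PNG uses the same CRC-32 as zlib, computed over the chunk type plus the chunk data (the length field is excluded), so a few lines of Java with java.util.zip.CRC32 should reproduce 907753DE from the bytes above:
import java.util.zip.CRC32;

public class PngCrcCheck {
    public static void main(String[] args) {
        // Chunk type "IHDR" followed by its 13 data bytes from the dump above;
        // the 4-byte length field is deliberately not part of the CRC.
        byte[] typeAndData = {
            0x49, 0x48, 0x44, 0x52,                         // "IHDR"
            0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, // width = 1, height = 1
            0x08, 0x02, 0x00, 0x00, 0x00                    // bit depth, color type, ...
        };
        CRC32 crc = new CRC32();
        crc.update(typeAndData);
        System.out.printf("%08x%n", crc.getValue());        // expected: 907753de
    }
}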

javax.net.ssl.SSLException: Received fatal alert: unexpected_message in Java7

We have an HTTPS client that connects to a web service over SSL. This always worked fine with Java 1.6.
Last week we switched the client to Java 1.7. Unfortunately, the client is no longer able to connect to the web service. I want to know what is causing this and how to fix it.
The client throws the following exception:
javax.net.ssl.SSLException: Received fatal alert: unexpected_message
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
    at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
    at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1959)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1077)
    at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312)
    at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:702)
    at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:122)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at org.apache.commons.httpclient.HttpConnection.flushRequestOutputStream(HttpConnection.java:827)
    at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:1975)
    at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:993)
    at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:397)
    at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:170)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:396)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:324)
Here is the detailed log info.
Ignoring unavailable cipher suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
Ignoring unavailable cipher suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA
Ignoring unavailable cipher suite: TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
Ignoring unsupported cipher suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
Ignoring unsupported cipher suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
Ignoring unsupported cipher suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
Ignoring unsupported cipher suite: TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
Ignoring unsupported cipher suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
Ignoring unsupported cipher suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
Ignoring unsupported cipher suite: TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
Ignoring unsupported cipher suite: TLS_RSA_WITH_AES_256_CBC_SHA256
Ignoring unavailable cipher suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
Ignoring unsupported cipher suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
Ignoring unsupported cipher suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
Ignoring unavailable cipher suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA
Ignoring unsupported cipher suite: TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
Ignoring unsupported cipher suite: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
Ignoring unsupported cipher suite: TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
Ignoring unavailable cipher suite: TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
Ignoring unavailable cipher suite: TLS_RSA_WITH_AES_256_CBC_SHA
Ignoring unsupported cipher suite: TLS_RSA_WITH_AES_128_CBC_SHA256
Allow unsafe renegotiation: true
Allow legacy hello messages: true
Is initial handshake: true
Is secure renegotiation: false
main, setSoTimeout(30000) called
main, setSoTimeout(30000) called
%% No cached client session
*** ClientHello, TLSv1
RandomCookie: GMT: 1392263294 bytes = { 158, 254, 253, 221, 176, 200, 181, 30, 189, 167, 209, 227, 105, 106, 207, 196, 50, 6, 21, 179, 125, 69, 112, 158, 49, 234, 113, 10 }
Session ID: {}
Cipher Suites: [TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_RC4_128_SHA, SSL_RSA_WITH_RC4_128_SHA, TLS_ECDH_ECDSA_WITH_RC4_128_SHA, TLS_ECDH_RSA_WITH_RC4_128_SHA, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_RC4_128_MD5, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
Compression Methods: { 0 }
Extension elliptic_curves, curve names: {secp256r1, sect163k1, sect163r2, secp192r1, secp224r1, sect233k1, sect233r1, sect283k1, sect283r1, secp384r1, sect409k1, sect409r1, secp521r1, sect571k1, sect571r1, secp160k1, secp160r1, secp160r2, sect163r1, secp192k1, sect193r1, sect193r2, secp224k1, sect239k1, secp256k1}
Extension ec_point_formats, formats: [uncompressed]
Extension server_name, server_name: [host_name: messaging.xxxxx.com]
***
[write] MD5 and SHA1 hashes: len = 180
0000: 01 00 00 B0 03 01 53 FC 40 7E 9E FE FD DD B0 C8 ......S.#.......
0010: B5 1E BD A7 D1 E3 69 6A CF C4 32 06 15 B3 7D 45 ......ij..2....E
0020: 70 9E 31 EA 71 0A 00 00 2A C0 09 C0 13 00 2F C0 p.1.q...*...../.
0030: 04 C0 0E 00 33 00 32 C0 07 C0 11 00 05 C0 02 C0 ....3.2.........
0040: 0C C0 08 C0 12 00 0A C0 03 C0 0D 00 16 00 13 00 ................
0050: 04 00 FF 01 00 00 5D 00 0A 00 34 00 32 00 17 00 ......]...4.2...
0060: 01 00 03 00 13 00 15 00 06 00 07 00 09 00 0A 00 ................
0070: 18 00 0B 00 0C 00 19 00 0D 00 0E 00 0F 00 10 00 ................
0080: 11 00 02 00 12 00 04 00 05 00 14 00 08 00 16 00 ................
0090: 0B 00 02 01 00 00 00 00 1B 00 19 00 00 16 6D 65 ..............me
00A0: 73 73 61 67 69 6E 67 2E 63 6F 76 69 73 69 6E 74 ssaging.xxxxx
00B0: 2E 63 6F 6D .com
main, WRITE: TLSv1 Handshake, length = 180
[Raw write]: length = 185
0000: 16 03 01 00 B4 01 00 00 B0 03 01 53 FC 40 7E 9E ...........S.#..
0010: FE FD DD B0 C8 B5 1E BD A7 D1 E3 69 6A CF C4 32 ...........ij..2
0020: 06 15 B3 7D 45 70 9E 31 EA 71 0A 00 00 2A C0 09 ....Ep.1.q...*..
0030: C0 13 00 2F C0 04 C0 0E 00 33 00 32 C0 07 C0 11 .../.....3.2....
0040: 00 05 C0 02 C0 0C C0 08 C0 12 00 0A C0 03 C0 0D ................
0050: 00 16 00 13 00 04 00 FF 01 00 00 5D 00 0A 00 34 ...........]...4
0060: 00 32 00 17 00 01 00 03 00 13 00 15 00 06 00 07 .2..............
0070: 00 09 00 0A 00 18 00 0B 00 0C 00 19 00 0D 00 0E ................
0080: 00 0F 00 10 00 11 00 02 00 12 00 04 00 05 00 14 ................
0090: 00 08 00 16 00 0B 00 02 01 00 00 00 00 1B 00 19 ................
00A0: 00 00 16 6D 65 73 73 61 67 69 6E 67 2E 63 6F 76 ...messaging.xxx
00B0: 69 73 69 6E 74 2E 63 6F 6D xx.com
[Raw read]: length = 5
0000: 15 03 01 00 02 .....
[Raw read]: length = 2
0000: 02 0A ..
main, READ: TLSv1 Alert, length = 2
main, RECV TLSv1 ALERT: fatal, unexpected_message
main, called closeSocket()
main, handling exception: javax.net.ssl.SSLException: Received fatal alert: unexpected_message
main, called close()
main, called closeInternal(true)
main, called close()
main, called closeInternal(true)
main, called close()
main, called closeInternal(true)
The solution to this problem is:
Disable elliptic curves with the JVM option: -Dcom.sun.net.ssl.enableECC=false
Disable the server name extension: -Djsse.enableSNIExtension=false
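For example, on the command line (client.jar is just a placeholder for your own client), or programmatically, provided the properties are set before any JSSE/HttpClient code runs:
// Command line:
//   java -Dcom.sun.net.ssl.enableECC=false -Djsse.enableSNIExtension=false -jar client.jar
//
// Or set the same properties very early in main(), before the first SSL connection:
public class ClientBootstrap {
    public static void main(String[] args) throws Exception {
        System.setProperty("com.sun.net.ssl.enableECC", "false");   // disable elliptic-curve suites
        System.setProperty("jsse.enableSNIExtension", "false");     // disable the server_name extension
        // ... create the commons-httpclient HttpClient and call the web service as before
    }
}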
