JMS Correlation ID is different in MQ

I'm writing a Java Spring Boot app that connects to MQ, reads messages (placed by another app), and returns a response to them.
I'm trying to set the correlation ID of the reply to the message ID we've just received.
public void sendMessage(String destination, Item item, Message originalMessage) throws IOException {
    jmsTemplate.send(destination, session -> {
        // Correlate the reply with the request by echoing its message ID
        Message message = createBytesMessage(item, session);
        message.setJMSCorrelationID(originalMessage.getJMSMessageID());
        log.info("Message: " + message);
        return message;
    });
}
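For completeness, createBytesMessage isn't shown above; a minimal sketch of what it might look like (assuming the Item is serialized to JSON with Jackson, which matches the outbound payload shown further down):
import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.Session;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical helper, not shown in the original post: serializes the Item to JSON
// and wraps it in a JMS BytesMessage.
private BytesMessage createBytesMessage(Item item, Session session)
        throws JMSException, JsonProcessingException {
    BytesMessage message = session.createBytesMessage();
    message.writeBytes(new ObjectMapper().writeValueAsBytes(item));
    return message;
}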
The original message ID is:
ID:414d5120514d44454c30315020202020f8af4d6202558722
The reply message has a correlation ID as expected:
ID:414d5120514d44454c30315020202020f8af4d6202558722
However, when I investigate MQ I can see the correlation ID is actually:
343134643531323035313464343434353463333033313530
Curiously, if I take my expected value for the correlation ID and convert it to hexadecimal, I get the following:
343134643531323035313464343434353463333033313530323032303230323066386166346436323032383338343232
// The first 48 characters match what is in MQ
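(For reference, the comparison above can be reproduced with a small helper like the illustrative one below, which converts each ASCII character of the ID string to its two-digit hex code.)
// Illustrative only: hex-encodes the characters of the correlation ID string,
// which is what ends up in MQ's 24-byte CORRELID field.
static String toHex(String s) {
    StringBuilder sb = new StringBuilder();
    for (byte b : s.getBytes(java.nio.charset.StandardCharsets.US_ASCII)) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}
// toHex("414d5120514d44454c30315020202020f8af4d6202558722") starts with 3431346435313230...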
Here is the debug information from the Java application:
This is the original message:
JMSMessage class: jms_bytes
JMSType: null
JMSDeliveryMode: 1
JMSDeliveryDelay: 0
JMSDeliveryTime: 0
JMSExpiration: 0
JMSPriority: 0
JMSMessageID: ID:414d5120514d44454c30315020202020f8af4d6202558722
JMSTimestamp: 1649452421210
JMSCorrelationID: null
JMSDestination: null
JMSRedelivered: false
JMSXAppID: DataFlowEngine
JMSXDeliveryCount: 1
JMSXUserID: mqm
JMS_IBM_Character_Set: UTF-8
JMS_IBM_Encoding: 546
JMS_IBM_Format:
JMS_IBM_MsgType: 1
JMS_IBM_PutApplType: 6
JMS_IBM_PutDate: 20220408
JMS_IBM_PutTime: 21134121
This is the debug output for the outbound message:
2022-04-08 22:13:42.020 DEBUG 31180 --- [enerContainer-1] o.springframework.jms.core.JmsTemplate : Sending created message:
JMSMessage class: jms_bytes
JMSType: null
JMSDeliveryMode: 2
JMSDeliveryDelay: 0
JMSDeliveryTime: 0
JMSExpiration: 0
JMSPriority: 4
JMSMessageID: null
JMSTimestamp: 0
JMSCorrelationID: ID:414d5120514d44454c30315020202020f8af4d6202558722
JMSDestination: null
JMSReplyTo: null
JMSRedelivered: false
7b22747261636b696e674964223a312c22737461747573223a2244454c495645524544222c226465
6c697665727944617465223a22323032322d30312d3031222c2274797065223a225345434f4e445f
434c415353227d

Related

Symfony\Component\Process\Exception\ProcessSignaledException The process has been signaled with signal "11"

I am trying to convert HTML to PDF using snappypdf (Laravel) on CentOS Linux 8.
Symfony\Component\Process\Exception\ProcessSignaledException^ {#863
-process: Symfony\Component\Process\Process^ {#861
-callback: null
-hasCallback: false
-commandline: "/usr/local/bin/wkhtmltopdf-amd64 --lowquality --margin-bottom '20' --margin-top '15' --orientation 'landscape' --page-size 'a3' --enable-javascript --footer-left '[page]' --footer-right '*Has Data Fault (True) = The data logger is transmitting data however with an issue such as sensor failure due to connectivity or damage.' '/tmp/knp_snappy619485d5cba579.63788877.html' '/tmp/knp_snappy619485d5cbae99.86089154.pdf'"
-cwd: "/var/www/html/ontotoreport"
-env: null
-input: null
-starttime: 1637123541.8356
-lastOutputTime: 1637123541.8983
-timeout: 60.0
-idleTimeout: null
-exitcode: 139
-fallbackStatus: []
-processInformation: array:8 [

Spring Boot Kafka consumer lags and reads wrong

I am using Spring Boot 2.2 together with a Kafka cluster (Bitnami Helm chart) and I'm seeing some pretty strange behaviour.
I have a Spring Boot app with several consumers on several topics.
Calling kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-app gives:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app event.topic-a 0 2365079 2365090 11 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app event.topic-a 1 2365080 2365091 11 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app batch.topic-a 0 278363 278363 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-a 1 278362 278362 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-b 0 1434 1434 0 consumer-5-a2f940c8-75e6-43d2-8d79-77d03e1ad640 /10.244.3.47 consumer-5
my-app event.topic-b 0 2530 2530 0 consumer-6-45a32d6d-eac9-4abe-b14f-47173338e62c /10.244.3.47 consumer-6
my-app batch.topic-c 0 1779 1779 0 consumer-1-d935a29f-ad3c-4292-9ace-5efdfff864d6 /10.244.3.47 consumer-1
my-app event.topic-c 0 12308 13502 1194 - - -
Calling it again gives
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app event.topic-a 0 2365230 2365245 15 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app event.topic-a 1 2365231 2365246 15 consumer-4-0c9a5616-3e96-413b-b770-b813c3d38a28 /10.244.3.47 consumer-4
my-app batch.topic-a 0 278363 278363 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-a 1 278362 278362 0 consumer-3-14cb199e-646f-46ad-8ee2-98f37107fa37 /10.244.3.47 consumer-3
my-app batch.topic-b 0 1434 1434 0 consumer-5-a2f940c8-75e6-43d2-8d79-77d03e1ad640 /10.244.3.47 consumer-5
my-app event.topic-b 0 2530 2530 0 consumer-6-45a32d6d-eac9-4abe-b14f-47173338e62c /10.244.3.47 consumer-6
my-app batch.topic-c 0 1779 1779 0 consumer-1-d935a29f-ad3c-4292-9ace-5efdfff864d6 /10.244.3.47 consumer-1
my-app event.topic-c 0 12308 13505 1197 consumer-2-d52e2b96-f08c-4247-b827-4464a305cb20 /10.244.3.47 consumer-2
As you can see, the consumer for event.topic-c is now there, but it lags by 1197 entries.
The app itself reads from the topic, but it always reads the same events (roughly the amount of the lag) and the offset does not change.
I get no errors or log entries, either from Kafka or from Spring Boot.
All I see is that, for this specific topic, the same events are processed again and again; all the other topics in the app work correctly.
Here is the client config:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = sap-integration
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
Any idea? I am a little bit lost.
Edit:
Spring config is pretty standard:
configProps[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootstrapAddress
configProps[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
configProps[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = MyJsonDeserializer::class.java
configProps[JsonDeserializer.TRUSTED_PACKAGES] = "*"
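To put those snippets in context, here is a minimal Java sketch of how such properties are typically wired into a Spring Kafka listener container factory; the bean and class names are illustrative, and StringDeserializer stands in for MyJsonDeserializer, which is not shown:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // enable.auto.commit=false, so the listener container commits offsets itself
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}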
Here are some examples from the logs:
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=..., headers={kafka_offset=37603361, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#6ca11277, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=topic-c, kafka_receivedTimestamp=1572633584589, kafka_groupId=my-app}]]
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=..., headers={kafka_offset=37603362, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#6ca11277, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=topic-c, kafka_receivedTimestamp=1572633584635, kafka_groupId=my-app}]]
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {topic-c-0=OffsetAndMetadata{offset=37603363, leaderEpoch=null, metadata=''}}
2019-11-01 18:39:46.268 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {topic-c-0=OffsetAndMetadata{offset=37603363, leaderEpoch=null, metadata=''}}
....
2019-11-01 18:39:51.475 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Received: 0 records
2019-11-01 18:39:51.475 DEBUG 1 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {}
while the consumer is lagging:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
my-app topic-c 0 37603363 37720873 117510 consumer-3-2b8499c0-7304-4906-97f8-9c0f6088c469 /10.244.3.64 consumer-3
No errors, no warnings... just no more messages.
Thanks
You need to look for logs like this...
2019-11-01 16:33:31.825 INFO 35182 --- [ kgh1231-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=kgh1231] (Re-)joining group
...
2019-11-01 16:33:31.872 INFO 35182 --- [ kgh1231-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [kgh1231-0, kgh1231-2, kgh1231-1, kgh1231-4, kgh1231-3]
...
2019-11-01 16:33:31.897 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Received: 10 records
...
2019-11-01 16:33:31.902 DEBUG 35182 --- [ kgh1231-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=foo1, headers={kafka_offset=80, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#3d00c543, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=0, kafka_receivedTopic=kgh1231, kafka_receivedTimestamp=1572640411869}]]
...
2019-11-01 16:33:31.906 DEBUG 35182 --- [ kgh1231-0-C-1] .a.RecordMessagingMessageListenerAdapter : Processing [GenericMessage [payload=foo5, headers={kafka_offset=61, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#3d00c543, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedPartitionId=3, kafka_receivedTopic=kgh1231, kafka_receivedTimestamp=1572640411870}]]
2019-11-01 16:33:31.907 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Commit list: {kgh1231-0=OffsetAndMetadata{offset=82, metadata=''}, kgh1231-2=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-1=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-4=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-3=OffsetAndMetadata{offset=62, metadata=''}}
2019-11-01 16:33:31.908 DEBUG 35182 --- [ kgh1231-0-C-1] essageListenerContainer$ListenerConsumer : Committing: {kgh1231-0=OffsetAndMetadata{offset=82, metadata=''}, kgh1231-2=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-1=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-4=OffsetAndMetadata{offset=62, metadata=''}, kgh1231-3=OffsetAndMetadata{offset=62, metadata=''}}
If you don't see anything like that then your consumer is not configured properly.
If you can't figure it out, post your log someplace like PasteBin.
Mysteriously, I fixed this issue by doing all of the following:
Upgrade Kafka to newer Version
Upgrade Spring Boot to newer Version
Improve performance of the Software
Switch to Batch processing
Add Health Check and combine it with liveness probe
It has now been running for more than a week without that error occurring again.

Access Interval from Carbon Interval

CarbonInterval {#1680 ▼
interval: + 00:07:36.0
#tzName: null
#localMonthsOverflow: null
#localYearsOverflow: null
#localStrictModeEnabled: null
#localHumanDiffOptions: null
#localToStringFormat: null
#localSerializer: null
#localMacros: null
#localGenericMacros: null
#localFormatFunction: null
#localTranslator: null
+"y": 0
+"m": 0
+"d": 0
+"h": 0
+"i": 0
+"s": 456
+"f": 0.0
+"weekday": 0
+"weekday_behavior": 0
+"first_last_day_of": 0
+"invert": 0
+"days": false
+"special_type": 0
+"special_amount": 0
+"have_weekday_relative": 0
+"have_special_relative": 0
}
How can I access this interval value (+ 00:07:36.0) from this CarbonInterval object?
CarbonInterval has a method called cascade() which will fill out the other fields accordingly and return a CarbonInterval object that looks something like this:
+"y": 0
+"m": 0
+"d": 0
+"h": 0
+"i": 7
+"s": 36
+"f": 0.0
Then we can access those properties or we can call format('%H:%I:%s') to get the desired value.
Example
$carbonInterval->cascade()->format('%H:%I:%s')

Pig Error on SUM function

I have data like -
store trn_date dept_id sale_amt
1 2014-12-14 101 10007655
1 2014-12-14 101 10007654
1 2014-12-14 101 10007544
6 2014-12-14 104 100086544
8 2014-12-14 101 1000000
9 2014-12-14 106 1000000
I want to get the sum of sale_amt. To do this, first I load the data using:
table = LOAD 'table' USING org.apache.hcatalog.pig.HCatLoader();
Then I group the data on store, tran_date, dept_id:
grp_table = GROUP table BY (store, tran_date, dept_id);
Finally, I try to get the SUM of sale_amt using:
grp_gen = FOREACH grp_table GENERATE
              FLATTEN(group) AS (store, tran_date, dept_id),
              SUM(table.sale_amt) AS tota_sale_amt;
I'm getting the error below:
================================================================================
Pig Stack Trace
---------------
ERROR 2103: Problem doing work on Longs
org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception while executing (Name: grouped_all: Local Rearrange[tuple]{tuple}(false) - scope-1317 Operator Key: scope-1317): org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:289)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:263)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.processOnePackageOutput(PigCombiner.java:183)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:161)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigCombiner$Combine.reduce(PigCombiner.java:51)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1645)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2103: Problem doing work on Longs
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:84)
at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:108)
at org.apache.pig.builtin.AlgebraicLongMathBase$Intermediate.exec(AlgebraicLongMathBase.java:102)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:330)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNextTuple(POUserFunc.java:369)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:333)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:378)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:298)
at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:281)
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Number
at org.apache.pig.builtin.AlgebraicLongMathBase.doTupleWork(AlgebraicLongMathBase.java:77)
================================================================================
Since I'm reading the table using the HCatalog loader and the data type in the Hive table is string, I have tried casting in the script as well, but I still get the same error.
I don't have HCatalog installed on my system, so I tried with a simple file, but the approach and code below will work for you.
1. SUM works only with numeric data types (int, long, float, double, bigdecimal, biginteger, or bytearray cast as double). It looks like your sale_amt column is a string, so you need to cast this column to long or double before using the SUM function.
2. You should not use store as a variable name because it is a reserved keyword in Pig; you have to rename it, otherwise you will get an error. I renamed this variable to "stores".
Example:
table:
1 2014-12-14 101 10007655
1 2014-12-14 101 10007654
1 2014-12-14 101 10007544
6 2014-12-14 104 100086544
8 2014-12-14 101 1000000
9 2014-12-14 106 1000000
PigScript:
A = LOAD 'table' USING PigStorage() AS (store:chararray,trn_date:chararray,dept_id:chararray,sale_amt:chararray);
B = FOREACH A GENERATE $0 AS stores,trn_date,dept_id,(long)sale_amt; --Renamed the variable store to stores and typecasted the sale_amt to long.
C = GROUP B BY (stores,trn_date,dept_id);
D = FOREACH C GENERATE FLATTEN(group),SUM(B.sale_amt);
DUMP D;
Output:
(1,2014-12-14,101,30022853)
(6,2014-12-14,104,100086544)
(8,2014-12-14,101,1000000)
(9,2014-12-14,106,1000000)

Java: WebSocket server won't send data to client properly

Handshakes are done correctly and the server can decode the data coming from the client, but the client closes the connection when I try to send data to it.
I've been using http://websocket.org/echo.html as the client with the latest versions of Firefox and Chrome.
Here's the data frame I'm trying to send:
129 10000001
4 100
116 1110100
101 1100101
115 1110011
116 1110100
-------
fin:true
opcode:1
len:4
masked:false
masks:[0, 0, 0, 0]
payload:test
?♦test
http://tools.ietf.org/html/rfc6455#section-5
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len | Extended payload length |
|I|S|S|S| (4) |A| (7) | (16/64) |
|N|V|V|V| |S| | (if payload len==126/127) |
| |1|2|3| |K| | |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
| Extended payload length continued, if payload len == 127 |
+ - - - - - - - - - - - - - - - +-------------------------------+
| |Masking-key, if MASK set to 1 |
+-------------------------------+-------------------------------+
| Masking-key (continued) | Payload Data |
+-------------------------------- - - - - - - - - - - - - - - - +
: Payload Data continued ... :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
| Payload Data continued ... |
+---------------------------------------------------------------+
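For reference, a minimal sketch (not from the original post) of how a short unmasked server-to-client text frame like the one above can be assembled, assuming the payload is shorter than 126 bytes so no extended length or masking key is needed:
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Illustrative only: builds an unmasked single-frame text message (FIN=1, opcode=0x1).
// buildTextFrame("test") yields [-127, 4, 116, 101, 115, 116], the same bytes as above.
static byte[] buildTextFrame(String text) {
    byte[] payload = text.getBytes(StandardCharsets.UTF_8);
    if (payload.length > 125) {
        throw new IllegalArgumentException("extended payload lengths are not handled in this sketch");
    }
    ByteArrayOutputStream frame = new ByteArrayOutputStream();
    frame.write(0x81);           // FIN bit set + text opcode
    frame.write(payload.length); // MASK bit = 0: server-to-client frames must not be masked
    frame.write(payload, 0, payload.length);
    return frame.toByteArray();
}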
And the server-side method responsible for sending data to the client:
public void sendData(Socket socket, byte[] dataBytes) {
    System.out.println(java.util.Arrays.toString(dataBytes));
    // [-127, 4, 116, 101, 115, 116]
    for (byte b : dataBytes) System.out.println(Integer.toString((int) 0xff & b, 2));
    /*
    10000001
    100
    1110100
    1100101
    1110011
    1110100
    */
    try {
        InputStream data = new ByteArrayInputStream(dataBytes);
        OutputStream out = socket.getOutputStream();
        // tested with ByteArrayOutputStream and written data == dataBytes
        //out.write((byte)0x00); // tried with and without this
        if (data != null) {
            // also tried out.write(dataBytes) instead of this
            byte[] buff = new byte[2048];
            while (true) {
                int read = data.read(buff, 0, 2048);
                if (read <= 0)
                    break;
                out.write(buff, 0, read);
            }
        }
        //out.write(-1);
        //out.write((byte)0xFF);
        out.flush();
        //out.close();
        if (data != null)
            data.close();
    } catch (Exception e) {
        e.printStackTrace();
        sockets.remove(socket);
    }
}
Some questions:
Do you wait for the connection to open fully before sending from the server?
Can you capture the stream using wireshark and see what's actually on the wire?
In Chrome's Javascript console do you see any WebSocket related errors?
In your onclose handler for the Javascript websocket object, can you console.log the values of code and reason from the event?
Like this:
ws.onclose = function (e) {
    console.log("closed - code " + e.code + ", reason " + e.reason);
}
Your issue was (probably) using the old protocol. Use the newer one...
See http://web-sockets.org, which has working source for a client and a server (in Java).
