GATLING - Subscribe checks for multiple messages real-time WebSocket

I'm using Gatling 3.2, and in my WebSocket stress test I use the subscribe method to check that real-time notifications are read correctly (a stress test for an in-memory broker).
The relevant part of the code is:
val answerEventCheck = ws.checkTextMessage("<<RECEIVER Notification received")
  .check(regex("^a.MESSAGE.testing."))
....
.exec(ws("<<RECEIVER - SUBSCRIBE")
  .sendText("[\"SUBSCRIBE\n" + "X-Authorization" + ":" + "${X-Authorization}" + "\nid:sub-0\ndestination:/topic/messages\n\n\u0000\"]")
  .await(seconds seconds)(answerEventCheck)
...
This subscribe step exits (unless it is inside a loop) as soon as the WebSocket check matches a single message against the regex pattern. How can I count, or read, exactly the N notifications that arrive for the user whose socket I opened (using the same regex)?
Before version 3.2 there was a WebSocket listener, but it is not present in this version. Is it possible to count the N notifications received?
Any suggestions?
Regards.

Related

Spring Cloud Stream Listener not pausing / waiting for the messages in Integration Testing Code

I have an application that connects to RabbitMQ through Spring Cloud Stream, and it works perfectly.
For integration test cases I am trying to use this sample: https://github.com/piomin/sample-message-driven-microservices/blob/master/account-service/src/test/java/pl/piomin/services/account/OrderReceiverTest.java
However, in my case the application sends back 3 messages over some time interval. If I use the lines below, it fetches the messages, but it breaks down when there is a delay in getting them.
int i = 1;
while (i > 0) {
    Message<String> received = (Message<String>) collector.forChannel(channels.statusMessage()).poll();
    if (received != null) {
        LOGGER.info("Order response received: {}", received.getPayload());
    }
}
So instead of my custom polling, is there any way I can wait and poll for my messages, and stop once I have received them?
I also want to pick messages and send them to different channels based on the response routing key. Is that possible?
Example: if the routingKey is "InProcess", the message should go to an inProcess method.
1) Your question is not at all clear; expand on it and explain exactly what you mean.
2) Routing keys are used within RabbitMQ to route messages to different queues; they are not used within the framework to route to channels or methods.
You can, however, use a condition on the @StreamListener (matching on headers['amqp_receivedRoutingKey']), but it's better to route messages to different queues instead.
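For illustration, a minimal sketch of such a condition; the Sink.INPUT channel, the String payload, and the method name are hypothetical, not taken from your code:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.messaging.Message;

@EnableBinding(Sink.class)
public class OrderStatusListener {

    // Hypothetical handler: only invoked when the received routing key
    // is "InProcess". A sibling method with its own condition would
    // handle the other routing keys.
    @StreamListener(target = Sink.INPUT,
            condition = "headers['amqp_receivedRoutingKey'] == 'InProcess'")
    public void handleInProcess(Message<String> message) {
        // handle only the "InProcess" messages here
    }
}

Keep in mind that every such method still consumes from the same queue; the condition only filters which handler runs, which is why routing to separate queues is the better design.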

Speeding up Message Hub Kafka Java Console sample

I've been working with the Message Hub sample code found at this link: https://github.com/ibm-messaging/message-hub-samples
In particular, I've been trying to increase the throughput of the producer with the Kafka Java console example. I noticed the documentation in this snippet of code:
// Synchronously wait for a response from Message Hub / Kafka on every message produced.
// For high throughput the future should be handled asynchronously.
RecordMetadata recordMetadata = future.get(5000, TimeUnit.MILLISECONDS);
producedMessages++;
I've already turned off the thread sleep found later in the code, which also helped increase the throughput, but I was hoping I could get some help on handling the future asynchronously in this block. Thanks in advance!
You have two basic options for handling the outcome of a produce request asynchronously:
1) Use the overloaded send with a completion-callback argument, which will be invoked asynchronously:
public Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback);
If you use the callback, you may ignore the future (see the sketch after this list).
2) Pass the Future to some other thread you have created, and have it inspect the future for completion, leaving the thread that calls send free to carry on.
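A minimal sketch of option 1, assuming the producer and record variables already built in the sample; the AtomicLong is my addition (the callback runs on the producer's I/O thread, so a plain producedMessages++ would not be thread-safe):

import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

final AtomicLong producedMessages = new AtomicLong();

// send() returns immediately; the callback fires once the broker
// has acknowledged (or failed) the record.
producer.send(record, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            exception.printStackTrace();        // the produce failed
        } else {
            producedMessages.incrementAndGet(); // replaces producedMessages++
        }
    }
});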

APNs error handling in ruby

I want to send notifications to Apple devices in batches (1,000 device tokens per batch, for example). And it seems that I can't know for sure that a message was delivered to APNs.
Here is the code sample:
ssl_connection(bundle_id) do |ssl, socket|
  device_tokens.each do |device_token|
    ssl.write(apn_message_for device_token)
    # I can check if there is an error response from APNs
    response_has_an_error = IO.select([socket], nil, nil, 0) != nil
    # ...
  end
end
The main problem is that if the network goes down after the ssl_connection is established, ssl.write(...) will never raise an error. Is there any way to check that the connection still works?
The second problem is the delay between ssl.write and the error answer being ready from APNs. I can pass a timeout parameter to IO.select after the last message is sent. Maybe it's OK to wait a few seconds for a batch of 1,000, but what if I have to send 1,000 messages for different bundle_ids?
At https://zeropush.com, we use a gem named grocer to handle our communication with Apple, and we had a similar problem. The solution we found was to use the socket's read_nonblock method before each write to check for incoming data on the socket, which would indicate an error.
It makes the logic a bit funny, because read_nonblock raises IO::WaitReadable when there is no data to read. So we call read_nonblock and rescue IO::WaitReadable before continuing as normal; in our case, catching the exception is the happy path. You may be able to use a similar approach rather than using IO.select(...).
One issue to be aware of is that Apple may not respond immediately and any notifications sent between a failing notification and reading from the socket will be lost.
You can see the code we are using in production at https://github.com/SymmetricInfinity/grocer/blob/master/lib/grocer/connection.rb#L30.
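For what it's worth, the same check-before-write idea expressed with Java NIO, purely as an illustration of the technique (the grocer link above is the Ruby original; the 6-byte buffer matches the legacy APNs binary error response of command, status, and notification identifier):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ApnsErrorCheck {

    // Non-blocking read before each write: returns true if APNs has
    // already pushed back an error response. read() returning 0 plays
    // the role of read_nonblock raising IO::WaitReadable (the happy path).
    static boolean errorPending(SocketChannel channel) throws IOException {
        channel.configureBlocking(false);
        ByteBuffer buf = ByteBuffer.allocate(6);
        return channel.read(buf) > 0;
    }
}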

Publisher finishes before subscriber and messages are lost - why?

Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub starting before pub), the publisher finishes, but the subscriber hangs, having not received all the messages. Why?
I think the socket is being closed before all the messages have been sent. Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

y = 0
for x in xrange(5000):
    st = random.randrange(1, 10)
    data = []
    data.append(random.randrange(1, 100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0, 10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st, s)
    socket.send("%d %s" % (st, s))
    print "Messages sent: %d" % x
    y += 1
print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring
# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

filter = ""  # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)

x = 0
while True:
    topic, data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic, data, x)
    x += 1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); term() will block until all of the queued messages have been sent. You also have the slow-joiner problem: the subscriber's connection may not be fully established by the time the publisher starts sending, so the earliest messages are dropped. You can add a sleep after the bind, but before you start publishing. This works in a test case, but you will want to really understand which of these is a solution and which is a band-aid; see the sketch below.
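A minimal sketch of both fixes. I'm writing it in Java with JeroMQ (org.zeromq:jeromq) here, but the sleep-after-bind and close()/term() calls map one-to-one onto pyzmq:

import org.zeromq.ZMQ;

public class FlushOnExitPub {
    public static void main(String[] args) throws InterruptedException {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.PUB);
        socket.bind("tcp://*:5556");
        Thread.sleep(1000); // band-aid for the slow joiner: give subscribers
                            // time to finish connecting before publishing
        for (int x = 0; x < 5000; x++) {
            socket.send(x + " some payload");
        }
        socket.close();
        context.term();     // blocks until queued messages are flushed out
    }
}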
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status-type message: it doesn't matter if you miss any of the previous statuses, because the latest one is correct, so message loss doesn't matter. The benefit of this approach is that you end up with a more robust and performant system. The downside appears when you can't design your messages this way.
Your example includes a type of message that tolerates no loss. Another such type would be transactional messages: for example, if you only sent the deltas of what changed in your system, you could not afford to lose any of them. Database replication is often managed this way, which is why DB replication is often so fragile. To provide guarantees, you need to do a couple of things. First, add a persistent cache: each message sent is logged in it and assigned a unique id (preferably a sequence) so that clients can determine whether they are missing a message. Second, add a socket (ROUTER/REQ) on which clients request the missing messages individually. Alternatively, you could use the secondary socket to request a resend over PUB/SUB; all clients would then receive the messages again (which works for the multicast version) and ignore the ones they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
An alternative approach is to create your own broker using ROUTER/DEALER sockets. When the ROUTER socket sees a DEALER connect, it stores that client's ID; when it needs to send data, it iterates over all client IDs and sends the message to each. Each message should contain a sequence number so that a client knows which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn. A sketch of the sequence-number idea follows.
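A sketch of the sequence-id half of that design (Java/JeroMQ again; the publisher is assumed to prefix every message with a monotonically increasing number, and the recovery socket is left out, so this only shows how a client notices the gap it would then request):

import org.zeromq.ZMQ;

public class GapDetectingSub {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.SUB);
        socket.connect("tcp://localhost:5556");
        socket.subscribe("".getBytes()); // subscribe to everything

        long expected = -1;
        while (true) {
            // messages are assumed to look like "<sequence> <payload>"
            String[] parts = new String(socket.recv()).split(" ", 2);
            long seq = Long.parseLong(parts[0]);
            if (expected >= 0 && seq != expected) {
                // messages [expected, seq) were lost: this is where the
                // client would ask the recovery socket to resend them
                System.out.println("missing " + expected + " .. " + (seq - 1));
            }
            expected = seq + 1;
        }
    }
}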

Block TCP-send till ACK returned

I am programming a client application that sends TCP/IP packets to a server. Because of timeout issues, I want to start a timer as soon as the ACK packet is returned (so there can be no timeout while the packet has not yet reached the server). I want to use the WinAPI.
Setting the socket to blocking mode doesn't help, because the send command returns as soon as the data is written into the buffer (if I am not mistaken). Is there a way to block send until the ACK is returned, or is there any other way to do this without writing my own TCP implementation?
Regards
It sounds like you want the minimum implementation that achieves your goal. In that case you should set your socket to blocking mode; after the send, which blocks until all data is handed to the stack, you call recv, which in turn blocks until the acknowledgement is received or the server end closes or aborts the connection. (Note that the TCP-level ACK is handled inside the stack and is never visible through the socket API, so the acknowledgement here has to be a reply sent by the server application.)
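Sketched in Java for brevity (the blocking WinAPI calls behave the same way); the host, port, and reply format are hypothetical:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class BlockingSendAndWait {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint; a java.net.Socket is blocking by default
        try (Socket sock = new Socket("server.example", 9000)) {
            OutputStream out = sock.getOutputStream();
            out.write("hello".getBytes());
            out.flush();

            // Blocks until the server application sends its reply or the
            // connection drops; the TCP-level ACK itself never surfaces here.
            InputStream in = sock.getInputStream();
            int first = in.read();
            System.out.println(first < 0 ? "connection closed" : "ack received");
        }
    }
}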
If you wanted to go further, you'd have to structure your client application to support asynchronous communication. There are a few techniques with varying degrees of complexity: polling with select() (simple), the event model using WSAEventSelect/WSAWaitForMultipleEvents (challenging), and the I/O completion port model (very complicated).
Pseudocode, made runnable here as Python: it will wait until the ack is received, after which you can call whatever functionality you want. I chose a made-up function, send_data, which sends information over the socket once the ack has arrived.
import select

data = ''
while True:
    readable, writable, errors = select.select([sock], [], [])
    if sock in readable:
        data += sock.recv(4096)
        if is_ack(data):       # is_ack: made-up check for the ack message
            timer.start()      # not sure why you want this
            break
send_data(sock)                # the made-up function mentioned above
