Go channel not responding under load test

I am running some load testing on a Go program that uses channels.
Messages are sent to the channel using the code below, available here.
select {
case globals.hub.join <- msg:
default:
    // Reply with a 500 to the user.
    s.queueOut(ErrUnknownReply(msg, msg.Timestamp))
    s.inflightReqs.Done()
    logs.Err.Println("s.subscribe: hub.join queue full, topic ", msg.RcptTo, s.sid)
}
The receiving side of the channel is implemented as below, available here.
select {
case join := <-h.join:
    logs.Info.Println("hub.go : hub case join")
    ...
...
}
In the normal scenario the code works fine, but when the server is load tested, the channel stops receiving the messages sent to it and ends up in a non-responsive state. Is there a possible explanation for this?
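For reference, a minimal standalone sketch (hypothetical names, independent of my project) that reproduces the same drop behaviour whenever the receiving goroutine is blocked and never reaches its receive:

package main

import "fmt"

func main() {
    // Unbuffered, like a join channel with no free slots.
    join := make(chan int)
    blocked := make(chan struct{})

    // Receiver stuck on other work, never receiving from join
    // (like a hub goroutine that has stopped responding).
    go func() {
        <-blocked
    }()

    dropped := 0
    for i := 0; i < 100; i++ {
        select {
        case join <- i:
            // delivered to the hub
        default:
            dropped++ // same path as the "queue full" branch above
        }
    }
    fmt.Println("dropped:", dropped) // prints 100: every send hit default
}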
Any help is appreciated.
Full code of this is available here.

Related

Gorilla Websocket: Error during WebSocket handshake: Unexpected response code: 404

I cloned the Gorilla WebSocket chat example and updated it to use multiple rooms. However, I am getting the error:
Error during WebSocket handshake: Unexpected response code: 404
in Chrome whenever I try to make the connection. My source code is available on GitHub. It is very similar to the original example and was only changed slightly. I don't know why it is not working.
EDIT:
The problem occurs in this block of code:
for _, name := range []string{"arduino", "java", "go", "scala"} {
    room := newRoom(name)
    http.Handle("/chat"+name, room)
    go room.run()
}
Looping over a slice is causing an issue with the http.Handle function. If instead I register them individually:
room := newRoom("go")
http.Handle("/chat/go", room)
go room.run()
...
It works. How can I fix this?
So actually, from your index.html file, you connect to the wrong URL:
<!-- index.html -->
<script>
var serviceLocation = "ws://0.0.0.0:8080/chat/";
.....
function connectToChatserver() {
    room = $('#chatroom option:selected').val();
    wsocket = new WebSocket(serviceLocation + room);
    // it connects to /chat/<room>; note the slash after "chat"
This is your URL from main.go:
http.Handle("/chat"+name, room)
It will make a URL like this: http://localhost:8080/chatgo, rather than what you want: http://localhost:8080/chat/go.
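The fix is to include the slash when registering the handler, so the route matches the URL the client builds:

for _, name := range []string{"arduino", "java", "go", "scala"} {
    room := newRoom(name)
    http.Handle("/chat/"+name, room) // slash after "chat", matching the client
    go room.run()
}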
FYI, it will still error because you don't handle the channel properly; after I send one message, it is automatically closed (see the panic below). But this is another topic.
2020/08/04 06:42:10 running chat room java
2020/08/04 06:42:10 running chat room go
2020/08/04 06:42:10 running chat room arduino
2020/08/04 06:42:10 running chat room scala
2020/08/04 06:42:15 new client in room arduino
2020/08/04 06:42:15 client leaving room arduino
2020/08/04 06:42:15 client leaving room arduino
panic: close of closed channel
goroutine 6 [running]:
main.(*Room).run(0xc00007ac90)
/home/fahim/Projects/Golang/go-chat/room.go:70 +0x3b5
created by main.main
/home/fahim/Projects/Golang/go-chat/main.go:17 +0x2bd
exit status 2

CAPL Multiframe handling

I am writing a CAPL script for diagnostic request and response. I can get a response if the data is up to 8 bytes; if the data is multiframe, I am not getting a response, and the message on the trace is "Breaking connection between server and tester". How do I handle this? I know about the CAN TP frames, but in this case it should be handled by CAN/CANoe.
Please read up on the CANoe ISO-TP protocol. In case of a multiframe response, the tester has to send a flow control frame, which provides synchronization between sender and receiver and usually starts with 0x30. It also has fields for the block size of consecutive frames and the separation time. Try the CAPL code below.
variables
{
    message 0x710 msg = { dlc = 8, dir = rx };
    byte check_byte0;
}

on message 0x718
{
    // 0x10 in the PCI nibble marks the first frame of a multiframe response
    check_byte0 = this.byte(0) & 0x30;
    if (check_byte0 == 0x10)
    {
        // Send the flow control frame: 0x30 = continue to send,
        // block size 0, separation time 0
        msg.dword(0) = 0x30;
        msg.dword(4) = 0x00;
        output(msg);
    }
}
I was trying to send the request over a message ID in raw form, like 22 XX YY, which is a read-DID request. This works well if the response is less than 8 bytes; if the response is more than 8 bytes it won't work, so we need to use the diagnostic objects for the request and response as defined in the CDD (or whichever description file) used in the project.
If you are not using a CDD, you need to use CCI (CAPL callback interfaces); that is mostly necessary for simulation setups.

How do I determine that all actors have received a broadcast message

I have a single ActorA that reads from an input stream and sends messages to a group of ActorB's. When ActorA reaches the end of the input stream it cleans up its resources, broadcasts a Done message to the ActorB's, and shuts itself down.
I have approx 12 ActorB's that send messages to a group of ActorC's. When an ActorB receives a Done message from ActorA then it cleans up its resources and shuts itself down, with the exception of the last surviving ActorB which broadcasts a Done message to the ActorC's before it shuts itself down.
I have approx 24 ActorC's that send messages to a single ActorD. Similar to the ActorB's, when each ActorC gets a Done message it cleans up its resources and shuts itself down, with the exception of the last surviving ActorC which sends a Done message to ActorD.
When ActorD gets a Done message it cleans up its resources and shuts itself down.
Initially I had the ActorB's and ActorC's immediately propagate the Done message when they received it, but this might cause the ActorC's to shut down before all of the ActorB's have finished processing their queues; likewise the ActorD might shut down before the ActorC's have finished processing their queues.
My solution is to use an AtomicInteger that is shared among the ActorB's
class ActorB(private val actorCRouter: ActorRef,
             private val actorCount: AtomicInteger) extends Actor {

  private val init = {
    actorCount.incrementAndGet()
    ()
  }

  def receive = {
    case Done => {
      if (actorCount.decrementAndGet() == 0) {
        actorCRouter ! Broadcast(Done)
      }
      // clean up resources
      context.stop(self)
    }
  }
}
ActorC uses similar code, with each ActorC sharing an AtomicInteger.
At present all actors are initialized in a web service method, with the downstream ActorRef's passed in the upstream actors' constructors.
Is there a preferred way to do this, e.g. using calls to Akka methods instead of an AtomicInteger?
Edit: I'm considering the following as a possible alternative: when an actor receives a Done message it sets the receive timeout to 5 seconds (the program will take over an hour to run, so delaying cleanup/shutdown by a few seconds won't impact the performance); when the actor gets a ReceiveTimeout it broadcasts Done to the downstream actors, cleans up, and shuts down. (The routers for ActorB and ActorC are using a SmallestMailboxRouter)
class ActorB(private val actorCRouter: ActorRef) extends Actor {

  def receive = {
    case Done => {
      context.setReceiveTimeout(Duration.create(5, SECONDS))
    }
    case ReceiveTimeout => {
      actorCRouter ! Broadcast(Done)
      // clean up resources
      context.stop(self)
    }
  }
}
Sharing actorCount among related actors is not a good thing to do. An actor should only use its own state to handle messages.
How about having an ActorBCompletionHandler actor for the actors of type ActorB? All ActorB's will have a reference to the ActorBCompletionHandler. Every time an ActorB receives a Done message, it can do the necessary cleanup and simply pass the Done message on to the ActorBCompletionHandler. The ActorBCompletionHandler maintains the count as its own state variable; every time it receives a Done message, it simply updates the counter. As this state belongs solely to this actor, it does not need to be atomic and there is no need for any explicit locking. The ActorBCompletionHandler sends a Done message to the ActorC's once it receives the last Done message.
This way the count is not shared among actors but is managed only by the ActorBCompletionHandler. The same thing can be repeated for the other types.
A -> B's -> BCompletionHandler -> C's -> CCompletionHandler -> D
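A minimal sketch of that handler (Done and the router come from the question; the class name and the expected count passed at construction are illustrative):

import akka.actor.{Actor, ActorRef}
import akka.routing.Broadcast

class ActorBCompletionHandler(expected: Int, actorCRouter: ActorRef) extends Actor {
  // Plain local state: only this actor touches it, so no AtomicInteger
  // and no explicit locking are needed.
  private var remaining = expected

  def receive = {
    case Done =>
      remaining -= 1
      if (remaining == 0) {
        // The last ActorB has finished: notify the next stage and stop.
        actorCRouter ! Broadcast(Done)
        context.stop(self)
      }
  }
}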
Another approach could be to have one monitoring actor for every related group of actors. Using the watch API and the child Terminated event on the monitor, you can decide what to do once you receive the last Done message.
val child = context.actorOf(Props[ChildActor])
context.watch(child)

// in the monitor's receive:
case Terminated(child) => {
  log.info(child + " Child actor terminated")
}
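Fleshed out a little, the monitor could look like this (a sketch only: the group and downstream references are assumptions about the wiring, and Done is the message from the question):

import akka.actor.{Actor, ActorRef, Terminated}
import akka.routing.Broadcast

class GroupMonitor(group: Seq[ActorRef], downstream: ActorRef) extends Actor {
  // Watch every actor in the group so we get a Terminated event for each.
  group.foreach(context.watch)
  private var alive = group.size

  def receive = {
    case Terminated(_) =>
      alive -= 1
      if (alive == 0) {
        // The whole group is gone: propagate Done and stop the monitor.
        downstream ! Broadcast(Done)
        context.stop(self)
      }
  }
}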

routing files with zeromq (jeromq)

I'm trying to implement a "file dispatcher" on ZMQ (actually JeroMQ; I'd rather avoid JNI).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck when sending out the actual file data.
Clients are notified by the server and issue a request.
The server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data; I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to have the two sides talking to each other, roughly in this sequence:
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)

  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")

  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")

  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }

  // multithreaded server: the router hands out requests to DEALER workers via an inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)

  ZMQ.proxy(server, backend, null)
}
class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")

    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop // cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          // 1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) // header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes // dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: " + new String(bytes))
        case x => println("cmd " + x + " not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)

  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")

  val poller = new ZMQ.Poller(1) // "poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)

  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString) // message content is the filename to fetch
      case "chunk"   => gotChunk(data.toString, zmsg.pop.getData) // filename, chunk
      case "eof"     => endDownload(data.toString, zmsg.pop.getData) // filename, last chunk
    }
  }

  def startDownload(filename: String)
  {
    println("got notification: start download for " + filename)
    client.send("get".getBytes, ZMQ.SNDMORE) // command header
    client.send(filename.getBytes, 0)
  }

  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for " + filename + ": " + new String(bytes)) // callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }

  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for " + filename + ": " + new String(bytes)) // callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
The same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER; ROUTER is bidirectional.
"the server worker gets the request, and writes the response back to the inproc queue but the response never seems to go out of the server (can't see it in wireshark) and the client is stuck on the poller.poll awaiting the response."
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(..), the call blocks indefinitely, so you'll need a separate thread to send the filepaths.
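A sketch of that reordering (structure only: ZeroMQ sockets are not thread-safe, so the dispatch thread must use its own socket routed through the proxy rather than reuse server; the actual send calls are therefore left out):

workers foreach (_.start)

// dispatch from a dedicated thread, after the proxy is up
new Thread(new Runnable {
  override def run() {
    // open a dedicated socket here and send the "newfile" notifications;
    // reusing `server` from this thread would put two threads on one socket
  }
}).start()

ZMQ.proxy(server, backend, null) // blocks; nothing after this line runs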
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);

while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
    }
}
In the above code: 1) poll until the timeout, 2) then check for messages, and 3) if any are available, get them with receiver.recv(0). In your code, you poll and then drop straight into recv() without checking. You need to check whether the poller has messages for that polled socket before calling recv(); otherwise the receive will hang when there are no messages.
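Applied to the client above, the receive loop would look something like this (the timeout value is arbitrary):

while (true)
{
  // wait up to 1s for input, then only recv if this socket has data;
  // otherwise recvMsg would block forever on an empty socket
  poller.poll(1000)
  if (poller.pollin(0)) {
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    cmd.toString match {
      case "newfile" => startDownload(data.toString)
      case "chunk"   => gotChunk(data.toString, zmsg.pop.getData)
      case "eof"     => endDownload(data.toString, zmsg.pop.getData)
    }
  }
}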

HornetQ JMSBridge acknowledge

I have been learning the HornetQ code recently, and I have a question about the JMS bridge.
As you can see, there is a function named sendMessages() in JMSBridgeImpl.java. The function sends messages to the remote JMS server, but without acknowledging them.
But in the function named sendBatchNonTransacted(), only the last message is acknowledged, as in messages.getLast().acknowledge();.
So the question is: why isn't each message acknowledged in sendMessages()?
Apologies, my English is poor.
I'm online awaiting your help; thank you!
--------------------------------------------
Oh, thank you "Moj Far" very much for the first answer; I got it.
But I have another question: I modified the HornetQ source code.
I want to use a ClientConsumer (successfully initialized) to get messages from the local HornetQ
and use a JMS producer to send messages to the remote JMS server.
In the run() function of the SourceReceiver class, I modified it like this:
if (bridgeType == JMSBridgeImpl.ALL_NETTY_MODE) {
    msg = sourceConsumer.receive(1000);
} else { /* core client receives the message */
    cmsg = localClientConsumer.receive(1000);
    if (cmsg != null) {
        hq_msg = HornetQMessage.createMessage(cmsg, localClientSession);
        //hq_msg = HornetQMessage.createMessage(cmsg, null);
        hq_msg.doBeforeReceive();
        //cmsg.acknowledge(); do ack after send
        msg = hq_msg;
    }
}
But after two hours of running, the VM memory overflows.
I also tried not using localClientSession to create the HornetQMessage, but memory still overflows.
Is there something wrong in my code?
We have two ways to guarantee message delivery:
Transactional (used in sendBatchLocalTx() and sendBatchXA())
Non-transactional, i.e. acknowledgement-based (used in sendBatchNonTransacted())
All of sendBatchLocalTx(), sendBatchXA(), and sendBatchNonTransacted() use sendMessages() for sending messages, and then guarantee delivery in their own way (committing a transaction or acking messages)!
sendBatchNonTransacted() uses client acknowledgement mode; in this mode, acking one message (here, the last message) acks all messages consumed so far in the current session!
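As a small sketch of that JMS behaviour (plain javax.jms API; the connection and queue are assumed to exist already):

import javax.jms.Session

// CLIENT_ACKNOWLEDGE: acknowledge() on any message acknowledges every
// message consumed so far in this session.
val session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE)
val consumer = session.createConsumer(queue)

val first = consumer.receive()
val second = consumer.receive()
second.acknowledge() // acknowledges both first and second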
