Lync 2013 - consuming 180 ringing responses from a forked request - windows

Is it possible to configure Lync 2013 to send only a single 180/183 ringing response back upstream when an INVITE to Lync triggers multiple INVITEs to Lync subscriber endpoints that each end up generating a 180/183 message?
In the case of simultaneous ring, I want Lync to consume all these 180s to avoid unnecessary messaging back to the originator, which INVITEs Lync from behind an SBC.
It seems to be acting as a forking proxy rather than a B2BUA.

You're right that Lync forks calls. If a user has multiple endpoints, Lync will fork the call to each endpoint, and in return each endpoint will send its own ringing response.
You can create an MSPL script to catch 180 responses. Since MSPL is stateless, it would require a backing application (a ServerApplication) that checks whether a 180 response has already been sent for the current call and blocks subsequent ringing responses. Based on the assumption that the Call-ID header will be identical for all the forked requests, you can then decide which responses to forward and which to drop.
A simple MSPL would be something like:
<lc:applicationManifest
    lc:appUri="http://www.contoso.com/DefaultRoutingScript"
    xmlns:lc="http://schemas.microsoft.com/lcs/2006/05">
  <lc:responseFilter reasonCodes="1XX" />
  <lc:proxyByDefault action="true" />
  <lc:splScript><![CDATA[
    if (sipResponse && sipResponse.StatusCode == 180)
    {
        Dispatch("OnResponse");
    }
  ]]></lc:splScript>
</lc:applicationManifest>
Then, in your server application, you handle the OnResponse event; I imagine something like this:
public void OnResponse(object sender, ResponseReceivedEventArgs e)
{
    if (e.Response.StatusCode == 180)
    {
        var callIdHeader = e.Response.AllHeaders.FindFirst(Header.StandardHeaderType.CallID);
        if (callIdHeader != null)
        {
            var callId = callIdHeader.Value;
            if (ShouldSendRingingResponse(callId))
            {
                e.ClientTransaction.ServerTransaction.SendResponse(e.Response);
            }
        }
    }
}

public bool ShouldSendRingingResponse(string callId) { .... }
Then you can create some logic in the ShouldSendRingingResponse function to see whether to send the 180 response or not.
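For illustration, a minimal (untested) sketch of that logic could simply remember which Call-IDs have already had a ringing response forwarded; the field names here are made up, and System.Collections.Generic is assumed to be imported:

private readonly HashSet<string> _ringingCallIds = new HashSet<string>();
private readonly object _ringingLock = new object();

public bool ShouldSendRingingResponse(string callId)
{
    lock (_ringingLock)
    {
        // HashSet<T>.Add returns false when the Call-ID is already present,
        // i.e. a ringing response has already been forwarded for this call.
        return _ringingCallIds.Add(callId);
    }
}

You would also want to remove entries again once the call is established or terminated, otherwise the set grows without bound.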
Note that I did not test this, it's just a basic outline of how I would attempt to handle the situation.

There isn't a way to prevent this in Lync itself; however, an AudioCodes SBC is typically deployed alongside it, and that contains an option to handle this scenario.
Multiple 18x:
The device supports interworking of multiple 18x responses (including 180 Ringing, 181 Call Is Being Forwarded, 182 Call Queued, and 183 Session Progress) that are forwarded to the caller. The UA can be configured as supporting only receipt of the first 18x response (i.e., the device forwards only this response to the caller) or receipt of multiple 18x responses (the default). This is configured by the IP Profile parameter 'SBC Remote Multiple 18x Support'.


Pushing data to websocket browser client in Lua

I want to use a NodeMCU device (Lua-based top level) to act as a WebSocket server for one or more browser clients.
Luckily, there is code to do this here: NodeMCU Websocket Server
(courtesy of #creationix and/or #moononournation)
This works as described and I am able to send a message from the client to the NodeMCU server, which then responds based on the received message. Great.
My questions are:
How can I send messages to the client without them having to be sent as a response to a client request (standalone sending of data)? When I try to call socket.send(), socket is not found as a variable, which I understand, but I cannot work out how to do it! :(
Why does the decode() function output the extra variable? What is it for? I'm assuming it is for packet overflow, but I can never seem to make it return anything, regardless of my message length.
In the listen method, why has the author added a queuing system? Is this essential, or is it for applications that may receive multiple simultaneous messages? Ideally, I'd like to remove it.
I have simplified the code as below:
(excluding the decode() and encode() functions - please see the link above for the full script)
net.createServer(net.TCP):listen(80, function(conn)
  local buffer = false
  local socket = {}
  local queue = {}
  local waiting = false
  local function onSend()
    if queue[1] then
      local data = table.remove(queue, 1)
      return conn:send(data, onSend)
    end
    waiting = false
  end
  function socket.send(...)
    local data = encode(...)
    if not waiting then
      waiting = true
      conn:send(data, onSend)
    else
      queue[#queue + 1] = data
    end
  end
  conn:on("receive", function(_, chunk)
    if buffer then
      buffer = buffer .. chunk
      while true do
        local extra, payload, opcode = decode(buffer)
        if opcode == 8 then
          print("Websocket client disconnected")
        end
        --print(type(extra), payload, opcode)
        if not extra then return end
        buffer = extra
        socket.onmessage(payload, opcode)
      end
    end
    local _, e, method = string.find(chunk, "([A-Z]+) /[^\r]* HTTP/%d%.%d\r\n")
    local key, name, value
    for name, value in string.gmatch(chunk, "([^ ]+): *([^\r]+)\r\n") do
      if string.lower(name) == "sec-websocket-key" then
        key = value
        break
      end
    end
    if method == "GET" and key then
      acceptkey = crypto.toBase64(crypto.hash("sha1", key.."258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
      conn:send(
        "HTTP/1.1 101 Switching Protocols\r\n"..
        "Upgrade: websocket\r\nConnection: Upgrade\r\n"..
        "Sec-WebSocket-Accept: "..acceptkey.."\r\n\r\n",
        function ()
          print("New websocket client connected")
          function socket.onmessage(payload, opcode)
            socket.send("GOT YOUR DATA", 1)
            print("PAYLOAD = "..payload)
            --print("OPCODE = "..opcode)
          end
        end)
      buffer = ""
    else
      conn:send(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 12\r\n\r\nHello World!",
        conn.close)
    end
  end)
end)
I can only answer one question; the others may be better suited for the library author. Besides, SO is a format where you normally ask one question at a time.
How can I send messages to the client without it having to be sent as a response to a client request (standalone sending of data)?
You can't. Without the client contacting the server first and establishing a socket connection, the server wouldn't know where to send the messages. Even with SSE (server-sent events), it's the client that first initiates a connection to the server.

CAPL Multiframe handling

I am writing a CAPL script for diagnostic request and response. I get a response if the data is up to 8 bytes; if the data is multiframe, I get no response, and the message on the trace is "Breaking connection between server and tester". How do I handle this? I know about the CAN TP frames, but in this case it should be handled by CANoe itself.
Please read up on the CANoe ISO-TP protocol. In the case of a multiframe response, the tester has to send the flow control frame, which provides synchronization between sender and receiver and usually starts with 0x30. It also has fields for the block size of consecutive frames and the separation time. Try the CAPL code below.
variables
{
  message 0x710 msg = { dlc = 8, dir = rx };
  byte check_byte0;
}

on message 0x718
{
  check_byte0 = this.byte(0) & 0x30;
  if (check_byte0 == 0x10)   // first frame of a multiframe response
  {
    msg.dword(0) = 0x30;     // flow control: clear to send
    msg.dword(4) = 0x00;
    output(msg);
  }
}
I was trying to send the request as a raw message over a message ID, in the most basic form like 22 XX YY (a read-DID request). This works well if the response is less than 8 bytes; if the response is more than 8 bytes it won't work, so we need to use the diagnostic objects for the request and response as defined in the CDD (or whatever diagnostic description file is used in the project).
If you are not using a CDD, you need to use CCI (CAPL callback interfaces) in such cases; that is mostly necessary for simulation setups.
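For example, a rough sketch using diagnostic objects might look like this (the ECU and service qualifiers below are hypothetical placeholders; the real ones come from the CDD assigned in your project):

variables
{
  // "DoorECU.ReadSerialNumber" is a made-up qualifier from a hypothetical CDD
  diagRequest DoorECU.ReadSerialNumber req;
}

on key 'd'
{
  // Sending via the diagnostic layer lets CANoe handle the ISO-TP
  // segmentation and flow control for multiframe data automatically
  diagSendRequest(req);
}

on diagResponse DoorECU.ReadSerialNumber
{
  write("Response received for ReadSerialNumber");
}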

http_listener cpprestsdk how to handle multiple POST requests

I have developed a client-server application with Casablanca (cpprestsdk).
Every 5 minutes each client sends information from its task manager (processes, CPU usage, etc.) to the server via a POST request.
The project should be able to manage about 100 clients.
Every time the server receives a POST request, it opens an output file stream ("uploaded.txt"), extracts some initial info from the client (login, password), processes that info, saves everything to a file named after the client (for example client1.txt, client2.txt) in append mode, and finally replies to the client with a status code.
This is basically my POST handle code from server side:
void Server::handle_post(http_request request)
{
    auto fileBuffer =
        std::make_shared<Concurrency::streams::basic_ostream<uint8_t>>();
    try
    {
        auto stream = concurrency::streams::fstream::open_ostream(
            U("uploaded.txt"),
            std::ios_base::out | std::ios_base::binary).then([request, fileBuffer](pplx::task<Concurrency::streams::basic_ostream<unsigned char>> Previous_task)
        {
            *fileBuffer = Previous_task.get();
            try
            {
                request.body().read_to_end(fileBuffer->streambuf()).get();
            }
            catch (const exception&)
            {
                wcout << L"<exception>" << std::endl;
                //return pplx::task_from_result();
            }
            //Previous_task.get().close();
        }).then([=](pplx::task<void> Previous_task)
        {
            fileBuffer->close();
            //Previous_task.get();
        }).then([](task<void> previousTask)
        {
            // This continuation is run because it is value-based.
            try
            {
                // The call to task::get rethrows the exception.
                previousTask.get();
            }
            catch (const exception& e)
            {
                wcout << e.what() << endl;
            }
        });
        //stream.get().close();
    }
    catch (const exception& e)
    {
        wcout << e.what() << endl;
    }
    ManageClient();
    request.reply(status_codes::OK, U("Hello, World!")).then([](pplx::task<void> t) { handle_error(t); });
    return;
}
Basically it works, but if I try to send info from two clients at the same time, sometimes it works and sometimes it doesn't.
Obviously the problem is where I open the "uploaded.txt" file stream.
Questions:
1) Is Casablanca's http_listener really multitasking? How many tasks is it able to handle?
2) I didn't find an example similar to mine in the documentation; the closest one is the "Casalence120" project, but it uses the Concurrency::reader_writer_lock class (which seems to be a mutex-like approach).
What can I do in order to manage multiple POSTs?
3) Is it possible to read some client info before starting to open uploaded.txt?
I could open an output file stream directly with the name of the client.
4) If I lock access to the uploaded.txt file via a mutex, the server becomes sequential, and I think this is not a good way to use cpprestsdk.
I'm still getting familiar with cpprestsdk, so any suggestions would be helpful.
Yes, the REST SDK processes every request on a different thread.
I confirm there are not many examples using the listener.
The official sample using the listener can be found here:
https://github.com/Microsoft/cpprestsdk/blob/master/Release/samples/CasaLens/casalens.cpp
I see you are working with VS. I would strongly suggest moving to VC++ 2015, or better VC++ 2017, because the most recent compilers support coroutines.
Using co_await dramatically simplifies the readability of the code.
Essentially, every time you co_await a function, the compiler refactors the code into a "continuation", avoiding the penalty of blocking the thread that executes the function. This way, you get rid of the ".then" statements.
The file problem is a different story from the REST SDK. Accessing the file system concurrently is something you should test in a separate project. You can probably cache the first read and share the content with the other threads instead of accessing the disk every time.
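As a rough illustration of the file side of the problem (plain standard C++, not cpprestsdk-specific; the helper name and per-client file layout are made up): you could serialize only the write itself behind a mutex, so requests keep being handled in parallel and only the short append is sequential.

#include <fstream>
#include <mutex>
#include <string>

// Hypothetical helper: appends one client report to a per-client file.
// Only the write is locked, so concurrent POST handlers cannot interleave
// their output, while request parsing still runs in parallel.
void append_client_report(const std::string& client_name, const std::string& body)
{
    static std::mutex file_mutex;                   // shared by all requests
    std::lock_guard<std::mutex> guard(file_mutex);  // held only for the append
    std::ofstream out(client_name + ".txt", std::ios::app | std::ios::binary);
    out << body << '\n';
}

Whether this is acceptable depends on how large the bodies are; for small writes the lock is held only briefly, so the listener threads are not noticeably serialized.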

Vertx SocketJS disconnects after few seconds of server being busy

I need some help understanding where the disconnects occur (SockJS, Vert.x) and how the timeouts can be configured.
I am creating a SockJSServer along with an event bus bridge. The problem I observe is frequent WebSocket disconnects. Looking at the WebSocket frames, I see pings every 5 seconds and heartbeats every 1/2 second as I configured (which does seem to take effect). However, once the heartbeats are delayed for longer than 5 seconds, a disconnect comes with the message c[3000,'Go away']. As observed, it happens when the server is busy (doing something else on a separate thread).
I have searched the Vert.x documentation and looked over the Vert.x code and found a few configuration parameters (which appear to differ across versions and documentation):
.putNumber("ping_interval", 120000)
.putNumber("session_timeout", 1200000)
.putNumber("heartbeat_period",500)
To be absolutely sure, I have tried different configs, but they did not appear to have any impact. At this point, I think I have hit a dead end and need some help.
Vert.x version 2.1P3
Server snippet:
final SockJSServer server = vertx.createSockJSServer(httpServer);
server.bridge(new JsonObject().putString("prefix", "/eventbus")
        .putNumber("ping_interval", 120000)
        .putNumber("session_timeout", 1200000)
        .putNumber("heartbeat_period", 500),
    new JsonArray().addObject(new JsonObject()),
    new JsonArray().addObject(new JsonObject()));
Client code:
var eventBus = new EventBus('//hostX:12001/eventbus');
When you receive a SOCKET_IDLE event, you must not complete the event with a "true" parameter, as that indicates the socket must be closed:
SockJSHandler.create(vertx, handlerOptions).bridge(options, event -> {
    boolean result = true;
    switch (event.type()) {
        case SOCKET_CREATED:
            LOGGER.info("Socket created");
            break;
        case SOCKET_IDLE:
            // return here without completing the idle event with "true",
            // which would close the socket
            result = false;
            return;
        case SOCKET_CLOSED:
            LOGGER.info("Socket closed");
            break;
    }
    event.complete(result);
});

routing files with zeromq (jeromq)

I'm trying to implement a "file dispatcher" on ZMQ (actually JeroMQ, as I'd rather avoid JNI).
What I need is to load balance incoming files to processors:
each file is handled only by one processor
files are potentially large so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq but
with a push/pull behaviour rather than publish/subscribe
being able to handle the received file rather than writing it to disk
My idea is to use a mix of taskvent/tasksink and asyncsrv samples.
Client side:
one PULL socket to be notified of a file to be processed
one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
one PUSH socket to dispatch incoming file (names)
one ROUTER socket to handle file requests
a few DEALER workers managing the file transfers for clients and connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
Clients are notified by the server and issue a request.
The server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data; I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to get both sides talking to each other, roughly in this sequence.
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)

  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")

  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")

  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }

  // multithreaded server: router hands out requests to DEALER workers via an inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)

  ZMQ.proxy(server, backend, null)
}

class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop // cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          // 1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) // header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes // dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: " + new String(bytes))
        case x => println("cmd " + x + " not implemented!")
      }
    }
  }
}
client
object FileHandler extends App
{
  val context = ZMQ.context(1)

  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")

  val poller = new ZMQ.Poller(1) // "poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)

  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString) // message content is the filename to fetch
      case "chunk" => gotChunk(data.toString, zmsg.pop.getData) // filename, chunk
      case "eof" => endDownload(data.toString, zmsg.pop.getData) // filename, last chunk
    }
  }

  def startDownload(filename: String)
  {
    println("got notification: start download for " + filename)
    client.send("get".getBytes, ZMQ.SNDMORE) // command header
    client.send(filename.getBytes, 0)
  }

  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for " + filename + ": " + new String(bytes)) // callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }

  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for " + filename + ": " + new String(bytes)) // callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
The same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER; ROUTER is bidirectional.
the server worker gets the request, and writes the response back to
the inproc queue but the response never seems to go out of the server
(can't see it in wireshark) and the client is stuck on the poller.poll
awaiting the response.
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(..), the call blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
    }
}
In the above code: 1) poll until the timeout expires, 2) then check for messages, and 3) if one is available, fetch it with receiver.recv(0). In your code, however, you poll and then drop straight into recv() without checking. You need to check whether the poller has messages for the polled socket before calling recv(); otherwise the receiver can hang when there are no messages.
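Applied to the Scala client above, that pattern would look roughly like this (a sketch only, with an arbitrary 1-second timeout):

// Poll with a timeout and only read when input is actually pending,
// so the client never blocks inside recvMsg when nothing has arrived.
val poller = new ZMQ.Poller(1)
poller.register(client, ZMQ.Poller.POLLIN)
while (true) {
  poller.poll(1000) // wait up to 1 second for activity
  if (poller.pollin(0)) {
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    cmd.toString match {
      case "newfile" => startDownload(data.toString)
      case "chunk"   => gotChunk(data.toString, zmsg.pop.getData)
      case "eof"     => endDownload(data.toString, zmsg.pop.getData)
      case other     => println("unexpected header: " + other)
    }
  }
}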
