FreeSWITCH ESL library

I would like to understand the data format returned by the FreeSWITCH ESL library's getBody method, and also how to get the list of media bugs on a channel from ESL by issuing the command api uuid_buglist.
I can issue the command; my problem is how to read the data that comes back.
Please help.

esl_event_get_body() is a very simple wrapper function that returns event->body from an event.
To get the reply to your command, read handle.last_sr_event->body after calling esl_send_recv(&handle, cmd).
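If you are using the Python ESL wrapper, the same reply text is exposed through getBody() on the event returned by api(). Here is a minimal sketch; the host, port, password and channel UUID below are placeholders:

from ESL import ESLconnection

con = ESLconnection('127.0.0.1', '8021', 'ClueCon')  # placeholder connection details
if con.connected():
    uuid = '...'  # UUID of the channel you want to inspect
    e = con.api('uuid_buglist', uuid)
    # getBody() returns the raw text of the api reply; for uuid_buglist this
    # should be an XML listing of the media bugs on the channel, or an
    # "-ERR ..." string if the command failed.
    print(e.getBody())
con.disconnect()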

You can look at my open source project, an auto-dial project:
https://github.com/nwaycn/nway_ac
Here is how to receive the hangup message as a plain event:
# AutoCall, SetNumberIdle and the fs_* settings come from the project above
con = ESLconnection(fs_ip, fs_esl_port, fs_esl_auth)
if con.connected():
    thread.start_new_thread(AutoCall, (1, 1))
    # subscribe to hangup events in plain format
    e = con.events('plain', 'CHANNEL_HANGUP_COMPLETE')
    while True:
        ee = con.recvEvent()
        # print ee
        if ee:
            my_number = ee.getHeader('Caller-Caller-ID-Number')
            dest_number = ee.getHeader('Caller-Destination-Number')
            SetNumberIdle(dest_number)
con.disconnect()
Here is how to place a call to a phone:
def CallOut(dial_string, call_number):
    con = ESLconnection(fs_ip, fs_esl_port, fs_esl_auth)
    if con.connected():
        e = con.api(dial_string)
        SetNumberBusy(call_number)
        # getBody() returns the text of the api reply (e.g. the originate result)
        print e.getBody()
    else:
        print 'not connected'
    con.disconnect()

Related

How can I send messages to a specific client using Faye WebSockets?

I've been working on a web application that is essentially a web messenger built with Sinatra. My goal is to have all messages encrypted with PGP and to have full-duplex communication between clients using faye-websocket.
My main problem is being able to send messages to a specific client with Faye. On top of that, every message in a chatroom is stored twice, once per participant, since it is PGP-encrypted.
So far I've thought of starting up a new socket object for every client and storing them in a hash, but I don't know if this approach is the most efficient one. I have seen that socket.io, for example, lets you emit to a specific client, but it seems Faye WebSockets don't? I am also considering a pub/sub model, but once again I am not sure.
Any advice is appreciated, thanks!
I am iodine's author, so I might be biased in my approach.
I would consider naming a channel after the user ID (i.e. user1 ... user201983) and sending the message to that user's channel.
I think Faye will support this. I know that when using iodine's native WebSockets and built-in pub/sub, this is quite effective.
"So far I've thought of starting up a new socket object for every client and storing them in a hash..."
This is a very common mistake, often seen in simple examples.
It works only in single-process environments, and then you will have to recode the whole logic in order to scale your application.
The channel approach allows you to scale using Redis or any other pub/sub service without recoding your application's logic.
Here's a quick example you can run from the Ruby terminal (irb). I'm using plezi.io just to make it a bit shorter to code:
require 'plezi'
require 'json' # JSON.parse / to_json are used below

class Example
  def index
    "Use Websockets to connect."
  end

  def pre_connect
    if(!params[:id])
      puts "an attempt to connect without credentials was made."
      return false
    end
    return true
  end

  def on_open
    subscribe channel: params[:id]
  end

  def on_message data
    begin
      msg = JSON.parse(data)
      if(!msg["to"] || !msg["data"])
        puts "JSON message error", data
        return
      end
      msg["from"] = params[:id]
      publish channel: msg["to"].to_s, message: msg.to_json
    rescue => e
      puts "JSON parsing failed!", e.message
    end
  end
end

Plezi.route "/", Example
Iodine.threads = 1
exit # exiting irb starts the server
To test this example, use a JavaScript client, maybe something like this:
// in browser tab 1
var id = 1
ws = new WebSocket("ws://localhost:3000/" + id)
ws.onopen = function(e) {console.log("opened connection");}
ws.onclose = function(e) {console.log("closed connection");}
ws.onmessage = function(e) {console.log(e.data);}
ws.send_to = function(to, data) {
    this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);
// in browser tab 2
var id = 2
ws = new WebSocket("ws://localhost:3000/" + id)
ws.onopen = function(e) {console.log("opened connection");}
ws.onclose = function(e) {console.log("closed connection");}
ws.onmessage = function(e) {console.log(e.data);}
ws.send_to = function(to, data) {
    this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);
ws.send_to(1, "hello!")

Reading serial output with pySerial doesn't work reliably

I am connecting to a device using the code below on macOS. Out of 100 attempts, this code makes a connection only once or twice; the rest of the time it simply doesn't respond (and since there is no timeout, it hangs).
import atexit
import time
import serial

ser = serial.Serial(port="/dev/xyz", timeout=None, baudrate=115200,
                    parity=serial.PARITY_NONE, bytesize=serial.EIGHTBITS,
                    stopbits=serial.STOPBITS_ONE)

def exitSer(ser):
    print("Closing")
    ser.close()

atexit.register(exitSer, ser)
if ser.is_open:
    time.sleep(2)
    while True:
        print(ser.readline().decode("utf-8"))
Could you please tell me how to use tools like fcntl to find out whether the port is completely free and available for use, and how to reset the tty port's flags after forcibly freeing it?
Once this works, I have to run this multithreaded, where each thread reads line-oriented output from a different device. Any suggestions for that, just in case this works, are welcome.
import serial

def startSerial(tty_id):
    ser = serial.Serial(port=tty_id, timeout=None)
    # closing first and then re-opening is what made the port respond reliably
    ser.close()
    ser.open()
    if ser.isOpen():
        print(ser.portstr, ":connection successful.")
        return ser
    else:
        return False
Calling ser.close() before .open() fixed it. I have tested it about 200 times and I haven't been disappointed so far. I am testing it now multithreaded, and hopefully that works too.
Thank you, everybody.
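For the multithreaded part, here is a minimal sketch of what I plan to try next: one plain reader thread per device, reusing startSerial from above (the device paths are made-up placeholders, and I'm assuming each device emits newline-terminated output):

import threading

def readLoop(tty_id):
    # one reader per device; startSerial is the function from above
    ser = startSerial(tty_id)
    if not ser:
        return
    while True:
        print(tty_id + ": " + ser.readline().decode("utf-8").strip())

# the device paths below are placeholders for your own ttys
threads = [threading.Thread(target=readLoop, args=(tty,))
           for tty in ["/dev/cu.usbserial-A", "/dev/cu.usbserial-B"]]
for t in threads:
    t.start()
for t in threads:
    t.join()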

pyzmq recv_json can't decode message sent by send_json

Here is my code with the extraneous stuff stripped out:
coordinator.py
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port = socket.bind_to_random_port(ZMQ_ADDRESS)
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
while True:
    event = poller.poll(1)
    if not event:
        continue
    process_id, val = socket.recv_json()
worker.py
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect('%s:%s' % (ZMQ_ADDRESS, kwargs['zmq_port']))
socket.send_json(
    (os.getpid(), True)
)
what happens when I run it:
process_id, val = socket.recv_json()
File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/sugar/socket.py", line 380, in recv_json
return jsonapi.loads(msg)
File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/utils/jsonapi.py", line 71, in loads
return jsonmod.loads(s, **kwargs)
File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/__init__.py", line 451, in loads
return _default_decoder.decode(s)
File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/decoder.py", line 406, in decode
obj, end = self.raw_decode(s)
File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/decoder.py", line 426, in raw_decode
raise JSONDecodeError("No JSON object could be decoded", s, idx)
JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)
and if I dig in with ipdb:
> /Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/sugar/socket.py(380)recv_json()
379 msg = self.recv(flags)
--> 380 return jsonapi.loads(msg)
381
ipdb> p msg
'\x00\x9f\xd9\x06\xa2'
hmm, that doesn't look like JSON... is this a bug in pyzmq? am I using it wrong?
Hmm, OK, found the answer.
There is an annoying asymmetry in the ØMQ interface, so you have to be aware of the type of socket you are using.
In this case, my use of the ROUTER/DEALER architecture means that the JSON message sent from the DEALER socket via send_json gets wrapped in a multipart message envelope. The first part is a client id (I guess this is the '\x00\x9f\xd9\x06\xa2' I got above) and the second part is the JSON string we are interested in.
So in the last line of my coordinator.py I need to do this instead:
id_, msg = socket.recv_multipart()
process_id, val = json.loads(msg)
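To make the envelope visible in isolation, here is a minimal, self-contained sketch (the inproc address and the fake pid are just made up for the demo):

import json
import zmq

ctx = zmq.Context()

router = ctx.socket(zmq.ROUTER)          # stands in for coordinator.py
router.bind('inproc://envelope-demo')

dealer = ctx.socket(zmq.DEALER)          # stands in for worker.py
dealer.connect('inproc://envelope-demo')
dealer.send_json((1234, True))

# the ROUTER receives two frames: the connection's identity, then the JSON payload
identity, payload = router.recv_multipart()
print(repr(identity))       # opaque client id, like the '\x00\x9f...' above
print(json.loads(payload))  # [1234, True]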
IMHO this is bad design on the part of ØMQ/pyzmq; the library should abstract this away and have just send and recv methods that just work.
I got the clue from this question, How can I use send_json with pyzmq PUB SUB, so it looks like the PUB/SUB architecture has the same issue, and no doubt others do too.
This is described in the docs, but it's not very clear:
http://zguide.zeromq.org/page:all#The-Asynchronous-Client-Server-Pattern
Update
In fact, I found in my case I could simplify the code further, by making use of the 'client id' part of the message envelope directly. So the worker just does:
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.identity = str(os.getpid()) # or I could omit this and use ØMQ client id
socket.connect('%s:%s' % (ZMQ_ADDRESS, kwargs['zmq_port']))
socket.send_json(True)
It's also worth noting that when you want to send a message in the other direction, from the ROUTER, you have to send it as multipart, specifying which client it is destined for, e.g.:
coordinator.py
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port = socket.bind_to_random_port(ZMQ_ADDRESS)
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
pids = set()
while True:
    event = poller.poll(1)
    if not event:
        continue
    # the identity frame (the pid we set on the DEALER) comes first,
    # so read the envelope with recv_multipart rather than recv_json
    process_id, val = socket.recv_multipart()
    pids.add(process_id)
    # need some code in here to decide when to stop listening
    # and break the loop

for pid in pids:
    socket.send_multipart([pid, 'a string message'])
    # ^ do your own json encoding if required
I guess there is probably some ØMQ way of doing a broadcast message rather than sending to each client in a loop as I do above. I wish the docs just had a clear description of each available socket type and how to use them.

Can't send a message from xmpp4r?

I'm trying to test sending a message to one jid account by using xmpp4r:
require 'xmpp4r'
include Jabber
jid = JID::new('alice@wonderland.lit')
password = 'secr3t'
cl = Client::new(jid)
cl.connect('166.78.7.179')
cl.auth(password)
cl.send(Presence.new)
to = 'arthur@wonderland.lit'
subject = 'XMPP4R test'
body = 'Hi, this is a XMPP4R test'
m = Message::new( to, body ).set_type(:chat).set_id('1').set_subject(subject)
cl.send m
But I always get the following exception:
/home/subout/.rvm/gems/ruby-1.9.3-p374@subout/gems/xmpp4r-0.5/lib/xmpp4r/client.rb:118:in `rescue in auth': closed stream (Jabber::ClientAuthenticationFailure)
from /home/subout/.rvm/gems/ruby-1.9.3-p374@subout/gems/xmpp4r-0.5/lib/xmpp4r/client.rb:108:in `auth'
from send_message2.rb:9:in `<main>'
First of all, would you please add a Jabber::debug = true setting before cl.connect and post the output here?
Secondly, it looks like there is a problem with the XMPP server (are you sure it's running at '166.78.7.179'?).
And, last but not least, why did you decide to use the "obsolete" xmpp4r rather than its modern successor, Blather?

Connecting to Yahoo! mail from Ruby

I am trying to connect to a Yahoo! Mail account from Ruby using both net/imap and net/pop, but I randomly get an EOFError (from IMAP) or Connection Refused / Reset by peer (from POP). Has anybody tried to connect to Yahoo! Mail and had any experience with it?
There's a bug in ruby's net/imap library that is exposed when connecting to Yahoo.
The fix is straightforward and described here:
http://redmine.ruby-lang.org/issues/4509
Basically, edit imap.rb and change the inner loop of the search_response method from:
token = lookahead
case token.symbol
when T_CRLF
  break
when T_SPACE
  shift_token
end
data.push(number)
to:
token = lookahead
case token.symbol
when T_CRLF
  break
when T_SPACE
  shift_token
else
  data.push(number)
end
then test with the following code:
require 'net/imap'
Net::IMAP.debug = true
conn = Net::IMAP.new('imap.mail.yahoo.com', 143, false)
conn.instance_eval { send_command('ID ("GUID" "1")') }
conn.authenticate('LOGIN', ARGV[0], ARGV[1] )
conn.select("INBOX")
uids = conn.uid_search(['ALL'])
puts uids.join(',')
conn.logout
conn.disconnect
