When does FreeSWITCH stop transmitting voice streams to the ASR engine?

I'm new to FreeSWITCH and I've run into some questions. Does FreeSWITCH recognize the pause and then stop transmitting the voice stream, or does the ASR engine recognize the pause?

Yes, FreeSWITCH does recognize when speech is and is not present. This is handled with an audio threshold (a noise gate).
FreeSWITCH has an energy-level parameter in its mod_conference config file that you can adjust to define your ideal threshold level.
The intent of this parameter is to distinguish whether a user is actually talking when they are in a noisy area.

Related

How to solve the problem of a high-data-rate flow hitting switches that do not have a flow rule for that flow, in OpenFlow/Mininet?

I am using iperf traffic generation and a hard timeout as an extension to the simple_switch_13.py code in Mininet with the RYU SDN controller. I am using a linear topology with 8 switches and set the hard timeout to 5 seconds.
I am working with only one flow. I started iperf traffic between two hosts (say h1 to h7; the terms are the same as in the Mininet linear topology) for 10 seconds. When the flow starts, ARP packets are generated in the network. An ARP reply from h7 is then sent to h1, which creates seven packet-in messages (from s7, s6, ..., s1); the respective flow rules are installed in the switches and the reply finally reaches h1. Then h1 sends the TCP flow to h7, which also creates seven packet-in messages (from s1, s2, ..., s7); the respective flow rules are installed in the switches and the flow reaches h7. So far everything works fine.
But once the hard timeout (5 seconds) expires, the flow rules in the switches are deleted. Because the flow is still in the network, what should happen is this: the controller receives one packet-in message, the switch buffers the rest of the packets, and once the respective flow rule is installed in the switch's flow table the buffered packets use it to pass through. But that is not happening. The controller receives a lot of packet-in messages before the flow rule gets installed into the switch (every packet that arrives at the switch is sent to the controller). What might be the reason for so many packet-in messages? Are the switch buffers not working correctly (although I am getting packet-in messages with a buffer_id)? How can I solve this issue?
This also happens with an idle timeout, and with a UDP flow at the start (i.e. when h1 starts communicating with h7): the switches along the path generate a lot of packet-in messages.
After doing a lot of research I understand that this is not a problem with the hard timeout or idle timeout. It happens when a flow with a high data rate hits a switch that does not have a flow rule for it: the switch then sends a lot of packet-in messages for the same flow instead of queuing (or storing) the rest of the flow's packets after sending one packet-in for that flow. How can I solve this issue in Mininet?
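For reference, since buffered packet-ins come up above, here is a minimal sketch of the flow-install pattern used by the stock Ryu simple_switch_13 example (Ryu OpenFlow 1.3 API; the hard_timeout value simply mirrors the 5 seconds mentioned in the question). Passing the packet-in's buffer_id into the FlowMod lets the switch forward the buffered packet itself; it limits, but does not fully eliminate, duplicate packet-ins for packets punted before the rule lands, and a switch that reports OFP_NO_BUFFER is not buffering at all.

    # Sketch of buffer_id-aware flow installation, following simple_switch_13.
    from ryu.base import app_manager
    from ryu.ofproto import ofproto_v1_3


    class BufferAwareSwitch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        def add_flow(self, datapath, priority, match, actions, buffer_id=None):
            # Called from the packet-in handler with msg.buffer_id.
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            if buffer_id is not None and buffer_id != ofproto.OFP_NO_BUFFER:
                # The switch applies the new rule to the buffered packet itself,
                # so the controller does not have to send it back in a packet-out.
                mod = parser.OFPFlowMod(datapath=datapath, buffer_id=buffer_id,
                                        priority=priority, match=match,
                                        instructions=inst, hard_timeout=5)
            else:
                mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                                        match=match, instructions=inst,
                                        hard_timeout=5)
            datapath.send_msg(mod)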

How to test the N_As and N_Ar timeout parameters in the CanTp protocol using a CAPL script or any other possible way?

As part of CanTp protocol-related tests, I have been trying to test N_As and N_Ar timeout errors, where N_AsMax = 1000 ms and N_ArMax = 1000 ms.
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
It would be a great help if you could share a possible way to test these timing parameters using CANalyzer or CANoe.
CanTp is a protocol for extending the maximum data length (in bytes) of a CAN data frame beyond the traditional 8 bytes; please refer to ISO 15765-2. It uses either Single Frames or Multi-Frames, the latter being trains of related frames, each carrying a portion of the overall PDU. A flow control frame is sent, usually by the receiver, to address and instruct the transmitter on how the frames are to be split.
According to the docs:
N_Ar [is the] Time for transmission of the CAN frame (any N-PDU) on the receiver side (see ISO 15765-2).
N_As [is the] Time for transmission of the CAN frame (any N-PDU) on the sender side (see ISO 15765-2).
In addition, the following requirements are relevant:
[SWS_CanTp_00075] ⌈If the transmit confirmation is not received after a maximum time (equal to N_As), the CanTp module shall act as if it had received an unsuccessful transmission confirmation and any late confirmation shall be ignored. The CanTp module shall cancel (internally) the failed transmission. ⌋ ( )
[SWS_CanTp_00311] ⌈In case of N_Ar timeout occurrence (no confirmation from CAN driver for any of the FC frame sent) the CanTp module shall abort reception and notify the upper layer of this failure by calling the indication function PduR_CanTpRxIndication() with the result E_NOT_OK. ⌋ ( )
Coming back to your question:
Is it possible to create the N_As and N_Ar timeouts with CANalyzer and/or using CAPL?
Yes, by means of the osek_tp.dll file that you should have in your local CANoe installation (I'm using CANoe v10.0). Examples of how to use it are well documented in the help document AN-IND-1-012_CAPL_Callback_Interface.pdf, which should also be distributed in your CANoe install folder.
According to that document,
Basically, the OSEK_TP.DLL implements fault injection functionality that has to be enabled explicitly in order to prevent unintentional usage. Once activated, it is possible to setup a specific fault on a connection that is executed during the next data transfer.
I'd urge you to give it a read, and to refer to the linked documentation as well. I hope this points you in the right direction.
Additional info:
Transmitting data over ISO-TP in CANoe using CAPL

Broadcasting and Fetching Data over UART

I have a computer (used as a server) and several boards with ATmega microcontrollers, arranged something like this:
The computer connects to the boards over UART and RS485 (with a USB-to-RS485 converter); I have constraints that prevent me from using Modbus. I want to broadcast a message from the server over the bus and fetch the ID of each board (board IDs are 4 digits).
When the boards receive the broadcast message, they each try to send their own ID, and the server receives some fake (corrupted) IDs. I think this is a collision problem, because all the boards try to send data at the same time.
After searching for this problem, I found an approach: store a constant in each board that defines a board-specific delay, and when a board receives the broadcast message it sends its ID after that delay. This works and I no longer see collisions, but it has some problems:
The delay values of two boards might be the same.
It is only a good approach for a small number of boards.
It adds extra setup work when installing a board on the bus.
Does anybody know this problem and could help me solve it with a better solution?
You mention Modbus in your question, although some of your other stated facts deviate from it (such as 4-digit device numbers, while Modbus station addresses only go from 1 to 247). Also, Modbus does not support responses to broadcast messages. I therefore doubt a bit that you are actually using Modbus.
A scheme you could use (and that is classically used in multiple-access networks) would be:
(1) Once a broadcast is received, have each client watch the bus for responses for a time window based on its station ID. If a client sees activity, have it wait a minimum bus time (the time a module needs to answer the broadcast message at the current bus timing, plus the round trip for the master acknowledging that answer) plus an additional time based on its own ID, then go back to (1).
(2) If a client sees the bus unoccupied for its specified time, it sends back a broadcast answer.
(3) Have the master acknowledge the broadcast response from this client with the shortest possible message.
(4) If a client that has sent a broadcast response does not receive a proper ack, it goes back to (1).
This is not 100% secure and absolutely not according to the Modbus specification, but could work.
* is a transmission, - is a "wait"
**** (Bus master broadcast)
--------- station 100 waits 100ms
------------------ station 200 waits 200ms
**** Station 100 sends broadcast response
------------------ station 200 sees bus active and waits another 200ms
*** master acknowledges broadcast response of 100
------------------ station 200 sees bus active again and waits 200ms from last seen activity
**** Station 200 has seen bus quiet for 200ms and sends broadcast response
*** master acks brc response of 200
This can take quite a bit of time and needs the waiting times finely adjusted against the transmission time of broadcast responses and response acks, but can work, and actually is implemented that way in a lot of CSMA/CD networks.
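For illustration only, here is the per-station back-off logic sketched in Python with pyserial on a PC; on the actual ATmega boards the equivalent would be written in C against the UART, and the "ID1234" line framing is made up. It simplifies the scheme above by re-waiting the full ID-based delay whenever other traffic is seen, rather than adding a separate minimum bus time.

    import time
    import serial  # pyserial, used here only to illustrate the timing logic


    def bus_idle_for(ser, window_s):
        """Return True if nothing arrives on the bus for window_s seconds."""
        deadline = time.monotonic() + window_s
        while time.monotonic() < deadline:
            if ser.in_waiting:           # someone else is transmitting
                return False
            time.sleep(0.001)
        return True


    def answer_broadcast(ser, station_id, slot_s=0.001):
        """After seeing a broadcast, wait a delay proportional to our station ID,
        restarting the wait whenever other traffic appears, then send our ID."""
        backoff_s = station_id * slot_s
        while not bus_idle_for(ser, backoff_s):
            ser.reset_input_buffer()     # discard the other station's answer/ack
        ser.write(f"ID{station_id:04d}\n".encode())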
It will probably take longer, but here is another way to do it. First, design your protocol so that each command contains (or can contain) an ID, and boards only respond to commands addressed to their ID. Then, on your host, iterate through each possible ID and send a simple command to each of them. If you get a response, you know there is a board with that ID. If you don't get a response after some period of time, you know there is no board there.
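A rough host-side sketch of that polling loop in Python, assuming pyserial and a USB-to-RS485 adapter; the "PING1234" request framing and port name are invented for illustration, so substitute whatever request/response format your boards actually implement.

    import serial  # pyserial


    def scan_for_boards(port="/dev/ttyUSB0", baud=9600, ids=range(0, 10000)):
        """Poll every candidate 4-digit ID with a short per-board timeout."""
        found = []
        with serial.Serial(port, baud, timeout=0.05) as ser:
            for board_id in ids:
                ser.reset_input_buffer()
                ser.write(f"PING{board_id:04d}\n".encode())
                reply = ser.readline()          # b'' means timeout -> no board
                if reply.strip():
                    found.append(board_id)
        return found


    if __name__ == "__main__":
        print(scan_for_boards())

At 50 ms per probe, a full sweep of 10000 IDs takes roughly eight minutes, which is the "takes longer" trade-off mentioned above; a larger per-board timeout or slower baud rate stretches that further.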

iBeacon: When to send beacon event to a server

I am working on an iBeacon app where I monitor and range beacons. However, when the app starts ranging for a beacon in a region, I get an endless stream of beacon range updates for as long as the user is in the beacon's range.
My question is: when should I send the beacon proximity to the server?
And if someone could explain the optimal way to queue and send a list of beacon events to a web server, it would be much appreciated.
The optimal way to send beacon proximity events to the server all depends on your business use case. Here are a few common options:
Send an event whenever a new beacon identifier is first detected, along with the proximity at that time.
Send an event periodically (say every 10 minutes) with a full list of beacons seen during that period along with their minimum/maximum proximities over that period.
Send an event whenever the proximity crosses a threshold (e.g. send an event only when a unique beacon identifier first comes into near or immediate proximity).
Implementing the above on iOS typically involves tracking the detections in a Dictionary, and then triggering the server call at the appropriate logical time from the didRangeBeacons:inRegion callback, based on what has been tracked so far in that dictionary. Using logic like 1, 2 or 3 above ensures that the number of server calls stays limited.
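The bookkeeping itself is small. Below is a rough sketch in Python, purely to illustrate options 1 and 3 above; in an iOS app this logic would live in the didRangeBeacons:inRegion: delegate callback (in Swift or Objective-C), and post_to_server stands in for whatever HTTP call you use.

    import time

    NEAR_OR_CLOSER = {"immediate", "near"}   # proximity buckets, as plain strings here


    class BeaconReporter:
        """Tracks which beacons were already reported so the ranging callback
        (which fires roughly once per second) does not flood the server."""

        def __init__(self, post_to_server):
            self.post_to_server = post_to_server   # placeholder for your HTTP call
            self.last_proximity = {}               # beacon id -> last seen proximity

        def on_ranged(self, beacon_id, proximity):
            previous = self.last_proximity.get(beacon_id)
            self.last_proximity[beacon_id] = proximity
            if previous is None:
                # Option 1: report a beacon the first time it is seen at all.
                self.post_to_server(beacon_id, proximity, time.time())
            elif proximity in NEAR_OR_CLOSER and previous not in NEAR_OR_CLOSER:
                # Option 3: report again only when it crosses into near/immediate.
                self.post_to_server(beacon_id, proximity, time.time())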

Is it possible or even allowed to implement QoS on server and client? Live555

I have been looking at the RTSP implementation in Live555, and it seems to follow RTSP as defined by the IETF. So far it appears to report transmission statistics (data sent) on the server end, and reception statistics (data received) on the client end.
I am wondering: is it possible to implement send/receive (QoS) statistics reports for both the client and the server? I have a requirement to gather statistics on data sent and received at both the server and the client.
I am new to Live555 and the documentation is pretty obscure on this point, so any direction is appreciated!
Thanks
For the client side, an example can be found in the openRTSP test program.
openRTSP can display client-side QoS information:
Outputting QOS statistics
Use the "-Q" option to output QOS ("quality of service") statistics about the data stream (when the program exits). These statistics include the (minimum, average, maximum) bitrate, packet loss rate, and inter-packet gap. The "-Q" option takes an optional parameter, which specifies the length of the time intervals - in multiples of 100ms - over which the "minimum, average, maximum" statistics are computed. The default value of this parameter is "10", meaning that these statistics are measured every 1 second (i.e., 10x100ms).
For the server side, you can get the QoS information from RTPSink::transmissionStatsDB().
