How to allow remote hold music through on a FreeSWITCH RTP proxy?

With our FreeSWITCH RTP proxy, our users hear silence when the remote party puts us on hold. We want to hear the remote system's hold music instead.
We have a system that proxies calls through FreeSWITCH so we can deal with NAT traversal and do some transcoding in certain specific cases. We found that when our party put the remote party on hold, FreeSWITCH would play its own hold music instead of our master PBX's hold music. When the remote party put our party on hold, our party would also hear FreeSWITCH's hold music.
To fix the first case (our party putting the remote party on hold and hearing FreeSWITCH's hold music), we changed the hold music setting in vars.xml to the following, so that FreeSWITCH signals hold and our PBX plays the hold music to the remote party:
<X-PRE-PROCESS cmd="set" data="hold_music=indicate_hold"/>
When the remote party puts our caller on hold we expect to hear the hold music from the remote party's PBX. We currently hear silence.

You can do this in two ways.
The first is to set disable-hold to true in the SIP profile that the hold signaling is coming in on:
<param name="disable-hold" value="true"/>
See disable-hold information here.
And the second is to set rtp_disable_hold=true in the dialplan that the call routes through:
<action application="set" data="rtp_disable_hold=true"/>
See rtp_disable_hold=true information here.
The second option gives you the flexibility to remove the setting later, or to leave hold handling untouched in other scenarios where you may still want to use FreeSWITCH's hold music.

Related

How to make a Google Home (Mini) publish what it hears to an MQTT topic (and broker)?

I have a Google Home Mini and I'm trying to use it as a speech-to-text device. The way I intend to do so is by having the device listen to what is said and publish that input to an MQTT broker so that my application can consume it.
I have found this, which returns the input as text, but all it gives me is the certainty that I can get this data. I have little to no clue how to make it publish this data as an MQTT message.
I also found this, but can't make it work, because it states "There's a very easy way to recognize custom phrases in Google Assistant, [...] I won't cover it here". And even Google's instructions (open "Create an Applet") seem to be outdated with respect to IFTTT, because the steps simply can't be followed in IFTTT's current interface.
Here is a quick sketch of the architecture:
There are five arrows. The first one is, obviously, a physical process. The "Audio" and "Text" arrows are handled automatically by the hardware. The right-hand "MQTT Message" arrow is already working. So what I want help with is the "MQTT Message" arrow from "Google Home" to "MQTT Broker".
Thanks in advance.
The short answer to this is you don't (as you've described it).
The slightly longer answer is that you first have to move the arrow you are interested in to the cloud, and it's not an MQTT message.
The Action box needs to be hosted on a publicly accessible machine (e.g. AWS/GCP/Azure/IBM Cloud) so that the Google platform knows where to find it.
Google has two different types of Actions: one for conversational interactions and one for controlling smart home devices. You've not mentioned what you are trying to do, so I can't say which one you really want.
Google has recently announced the Local SDK for interacting with smart home devices, which is slightly closer to the diagram you have included. This can only be used for device control and still can't send MQTT messages; it supports HTTP and raw UDP or TCP (you might be able to implement an MQTT client using the raw TCP, but it would be a lot of work and I'm not convinced the keep-alive would work).
I think I got what you need:
Configure the Google Assistant to parse your speech, then connect it to IFTTT (I have done this in the past; it's very easy) to send HTTP requests.
Now create a local web server that understands these requests from IFTTT and publishes them to your broker.
And that's all!
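For what it's worth, here is a minimal sketch of that last step, assuming a Flask web server and the paho-mqtt client; the route name, topic, and broker address are placeholders for your own setup:

# Minimal sketch of the IFTTT -> local web server -> MQTT broker bridge.
# Assumes Flask and paho-mqtt are installed; the route, topic and broker
# address below are placeholders, not part of the original question.
from flask import Flask, request
import paho.mqtt.publish as publish

app = Flask(__name__)

MQTT_BROKER = "192.168.1.10"             # your broker's address (assumption)
MQTT_TOPIC = "home/google-home/speech"   # topic your application subscribes to

@app.route("/ifttt", methods=["POST"])
def ifttt_webhook():
    # IFTTT's Webhooks service can POST JSON such as {"text": "{{TextField}}"}
    payload = request.get_json(silent=True) or {}
    text = payload.get("text", "")
    # Re-publish the recognized speech as an MQTT message
    publish.single(MQTT_TOPIC, text, hostname=MQTT_BROKER)
    return "OK", 200

if __name__ == "__main__":
    # IFTTT must be able to reach this server, so it needs a public endpoint
    # (port forwarding or a tunnel service).
    app.run(host="0.0.0.0", port=8080)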

Personal Internet use monitoring

How could a (Windows) desktop application be created to monitor the amount of time spent on a particular website?
My first idea was to play with the hosts file to intercept requests, log them, and proxy. This feels a bit clunky, and I suspect my program would look like malware.
I feel like there must be a smarter way. Any ideas?
There is a tool similar to what you are looking for called K-9 Web Protection. It is aimed more at parents who want to monitor what their kids are up to on the internet. I installed it on my niece's computer with good results and praise, as it blocks sites, filters content, and restricts internet times. This may be OTT for your needs, but it's worth a shot since you can see what sites were visited.
The other option is to use a dedicated firewall/monitoring solution such as IPCop, a Linux-based distribution whose sole purpose is to provide a proxy, a stateful packet inspection (SPI) firewall, and an intrusion detection system (IDS).
Hope this helps,
Best regards,
Tom.
You could do this by monitoring active connections via netstat. If you need more advanced data, you can install the Windows Packet Capture Library (WinPcap) and capture detailed data about network use; inside your desktop app, identify the traffic that corresponds to "spending time" on a website (which might just be GET requests in your case, but I don't know) and record statistics as required. A sketch of the connection-monitoring idea follows below.
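Assuming Python and the psutil library (both assumptions; the original suggestion was netstat/WinPcap), the polling approach might look roughly like this. Mapping an address back to a website and deciding what counts as "time spent" are left open:

# Rough sketch: poll open TCP connections and tally time per remote host.
# Assumes psutil is installed; reverse-mapping an IP to a site name is naive.
import time
import socket
from collections import defaultdict

import psutil

POLL_SECONDS = 5
time_per_host = defaultdict(float)

def remote_hosts():
    hosts = set()
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
            try:
                name = socket.gethostbyaddr(conn.raddr.ip)[0]
            except OSError:
                name = conn.raddr.ip
            hosts.add(name)
    return hosts

while True:
    for host in remote_hosts():
        # Credit one poll interval to every host we currently talk to
        time_per_host[host] += POLL_SECONDS
    time.sleep(POLL_SECONDS)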
Route the traffic through a scriptable proxy and change the browser settings to point to that proxy.
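With a scriptable proxy such as mitmproxy, the logging part is only a few lines; here is a sketch of an addon that records each request's host and timestamp (the log file path is an assumption):

# Sketch of a mitmproxy addon that logs the host and timestamp of every
# request passing through the proxy. Run with: mitmproxy -s log_hosts.py
# and point the browser's proxy settings at mitmproxy (default port 8080).
import time
from mitmproxy import http

LOG_PATH = "visits.log"  # assumption: plain-text log, one line per request

class LogHosts:
    def request(self, flow: http.HTTPFlow) -> None:
        with open(LOG_PATH, "a") as f:
            f.write(f"{time.time():.0f}\t{flow.request.pretty_host}\n")

addons = [LogHosts()]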

is it reasonable to protect DRM'd content client side

Update: this question is specifically about protecting (enciphering/obfuscating) the content client side vs. doing it before transmission from the server. What are the pros/cons of an approach like iTunes', in which the files aren't ciphered/obfuscated before transmission?
As I added in my note to the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content provider deals are on board with it, but that doesn't free us of obligations already in place.
I recently read some information about how iTunes/FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:
1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client side or server side. This does give another chance to those in 1.
An extra bit of info: the client environment is Adobe AIR, with multiple content types involved (music, video, Flash apps, images).
So, is it reasonable to do as iTunes' FairPlay does and protect the media client side?
Note: I think unbreakable DRM is an unsolvable problem, and as with most people looking for an answer to this, the need for it comes from it already being in a contract with content providers ... along the lines of "reasonable best effort".
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client side or server side. This does give another chance to those in 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as happened with Wal-Mart)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem. Protecting code on the client isn't. All current approaches come with stingy restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations and Legal. As long as they don't stop alienating users, you don't need to bother.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you if you're allowed to use the content, since the bypass is usually just changing the input/output to one encryption API call...
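To make "tying the file to a user" a bit more concrete, here is a rough sketch of per-user key derivation and encryption. The library (cryptography), the HKDF and Fernet choices, and the master secret are illustrative assumptions, not how iTunes/FairPlay actually works; and as the answers above note, the key still has to reach the client, so this only raises the bar rather than preventing copying.

# Illustrative sketch of per-client ("tied to the user") encryption: derive a
# per-user key from a server-side master secret and encrypt content with it.
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

MASTER_SECRET = b"server-side secret, never shipped to clients"  # assumption

def user_key(user_id: str) -> bytes:
    # Derive a stable 32-byte key for this user, then base64 it for Fernet
    derived = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=user_id.encode(),
    ).derive(MASTER_SECRET)
    return base64.urlsafe_b64encode(derived)

def encrypt_for_user(user_id: str, plaintext: bytes) -> bytes:
    return Fernet(user_key(user_id)).encrypt(plaintext)

def decrypt_for_user(user_id: str, ciphertext: bytes) -> bytes:
    return Fernet(user_key(user_id)).decrypt(ciphertext)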
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than 5 minutes. Using decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse engineer the entire binary protection scheme. Properly done, this will stop all the kids.
On another note, if this is a product to be run on an operating system, don't use processor-specific or operating-system-specific anomalies such as the Windows PEB/TEB, syscalls, and processor bugs; those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.

How to extract information from client/server communication with no documentation?

What are some methods for capturing and analyzing undocumented client/server communication for the information you want, and then having your program look for this information in real time? For example, there are programs that look at online game client/server communication, extract information, and use it to do things like show locations on a third-party map, etc.
Wireshark will allow you to inspect communication between the client and server (assuming you're running one of them on your machine). As you want to perform this snooping in your own application, look at WinPcap. Being able to reverse engineer the protocol is a whole other kettle of fish, mind.
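For instance, a few lines of Python with scapy (which rides on a pcap capture driver such as WinPcap/Npcap) are enough to start looking at raw payloads; the port number below is just a placeholder for whatever the game actually uses:

# Quick capture sketch using scapy (backed by WinPcap/Npcap or libpcap).
# The port number is a placeholder; substitute the game's actual port.
from scapy.all import sniff, Raw

GAME_PORT = 27015  # assumption: replace with the real port

def handle(pkt):
    if Raw in pkt:
        payload = bytes(pkt[Raw])
        # Dump the first bytes of each payload so you can look for patterns
        print(payload[:32].hex())

sniff(filter=f"udp port {GAME_PORT}", prn=handle, store=False)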
In general, Wireshark is an excellent recommendation for traffic/protocol analysis; however, you seem to be looking for something else:
For example, programs that look at online game client/server communication and get information and use it to do things like show location on a 3rd party map, etc.
I assume you are referring to multiplayer games and game servers?
If so, these programs usually use a dedicated service connection to query the corresponding server for positional updates and other meta information on a different port; they don't actually intercept or inspect client/server communications in real time, and they don't really interfere with these updates either.
So, you'll find that most game servers provide support for a very simple passive connection (i.e. output only) that's merely there for exposing certain runtime state, which in turn is often simply polled by a corresponding external script/webpage.
Similarly, there's often also a dedicated administration interface provided on a different port, as well as another one that publishes server statistics, so that these can be easily queried for embedding neat stats in webpages.
Depending on the type of game server, these may offer public/anonymous use, or they may require certain credentials to access such a data port.
More complex systems will also allow you to subscribe only to specific state and updates, so that you can dynamically configure what data you are interested in.
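As a purely hypothetical sketch of what polling such a status port might look like (the host, port, query bytes, and JSON reply format are invented for illustration; real servers each define their own query protocol):

# Hypothetical sketch of polling a game server's public status port.
# Host, port, query and JSON reply are invented; real servers differ.
import json
import socket
import time

SERVER = ("game.example.com", 27050)   # assumption
QUERY = b"status\n"                    # assumption

def poll_status():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(QUERY, SERVER)
        reply, _ = s.recvfrom(4096)
    return json.loads(reply.decode())  # e.g. {"players": [{"name": ..., "pos": ...}]}

while True:
    state = poll_status()
    for player in state.get("players", []):
        print(player["name"], player.get("pos"))
    time.sleep(10)  # external map pages typically just poll on an interval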
So, even if you had complete documentation on the underlying protocol, you wouldn't really be able to directly inspect client/server communications without being in between these communications. This can however not be easily achieved. In theory, this would basically require a SOCKS proxy server to be set up and used by all clients, so that you can actually inspect the communications going on.
Programs like wireshark will generally only provide useful information for communications going on on your own machine/network, and will not provide any information about communications going on in between machines that you do not have access to.
In other words, even if you used wireshark to a) reverse engineer the protocol, b) come up with a way to inspect the traffic, c) create a positional map - all this would only work for those communications that you have access to, i.e. those that are taking place on your own machine/network. So, a corresponding online map would only show your own position.
Of course, there's an alternative: you can emulate a client, so that you are provided with the server-side updates about other clients; this will mostly have to be in spectator mode.
This in turn would mean that you are a passive client that's just consuming server-side state, but not providing any.
So that you can in turn use all these updates to populate an online map or use it for whatever else is on your mind.
This will however require your spectator/client to be connected to the server all the time, possibly taking up precious game server slots.
Some game servers provide dedicated spectator modes, so that you can observe the whole game play using a live feed. Most game servers will however automatically kick spectators after a certain idle timeout.

access remote video from program

I want to have a stress/performance testing for my content management site, especially for hosted streamed video part. I am using IIS to host the videos. More specifically, I am using the new Windows Server 2008 x64 and IIS 7.0.
The confusion is:
1. I plan to write code that starts a lot of threads; in each thread I will send a web request to the video URL and read the response stream from the server. However, I am not sure whether this behaves the same as a real user using a player to render the video (in my code I just read the stream, without actually playing it or writing it anywhere). I want the test to resemble the real scenario as much as possible;
2. I also plan to use a real Media Player (or whatever media player) to render the video, but my concern is that the player will use some hardware or other resources (video-card-specific memory?) to decode/render the video (not sure, needs guru help to check and confirm). If I start multiple players on my test machine, is there any potential hardware or resource contention between them? If there is contention, it is also not the actual end-user scenario, i.e. few users will start 100 players on their machine. :-)
Does anyone have any advice to me?
BTW: I prefer to use any .Net based solution, but not a must.
thanks in advance,
George
You should use mplayer. It has a lot of command-line options. I don't know whether all these options are available under Windows, but under Linux something like this is possible:
mplayer some_url -dumpvideo -dumpfile some_file
It will behave the same as a "normal" player, I think, and your test machine won't need to handle hundreds of decompression threads, so it fits your needs 1 and 2.
If you know the bit rate of your video stream, you can pace your download requests to simulate video player clients. The bit rate can be calculated from information carried in the stream, but that's a little more complicated. There is also software for stress testing video servers, such as this IP Video Monitor.
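That pacing idea can be sketched in a few lines of Python (shown here rather than .NET purely for brevity; the URL, bit rate, and client count are assumptions, and the same logic ports directly to .NET threads and HttpWebRequest):

# Rough sketch of pacing streamed downloads to simulate N player clients.
# URL, bit rate and client count are assumptions for illustration.
import threading
import time
import requests

VIDEO_URL = "http://mediaserver.example.com/videos/demo.wmv"  # assumption
BITRATE_BPS = 1_500_000   # 1.5 Mbit/s nominal bit rate (assumption)
CLIENTS = 50
CHUNK = 64 * 1024         # read 64 KiB at a time

def simulated_player() -> None:
    # A real player only consumes data at roughly the stream's bit rate,
    # so sleep after each chunk instead of reading flat out.
    seconds_per_chunk = (CHUNK * 8) / BITRATE_BPS
    with requests.get(VIDEO_URL, stream=True, timeout=30) as resp:
        for _ in resp.iter_content(chunk_size=CHUNK):
            time.sleep(seconds_per_chunk)

threads = [threading.Thread(target=simulated_player) for _ in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

In a real run you would also record per-request timings and failures so the test produces usable numbers.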
