FreeSWITCH with 10+ digit regex validation? - freeswitch

When we install FreeSWITCH, by default we get 20 endpoints (1000.xml to 1019.xml).
How can we create our own endpoints, such as +919885098850 or +16308045480?
Instead of creating static endpoints inside FreeSWITCH, how can we create endpoints outside of it?
For example:
1. We create the endpoints in a MySQL database.
2. We authenticate from a SIP application.
3. We then forward calls to FreeSWITCH.
4. FreeSWITCH forwards the call to the destination endpoint.
Can someone explain how to achieve this scenario?

You can create as many extensions as you want. Extensions 1000 to 1019 are just example extensions.
All you need to do is copy 1000.xml, change the extension number from 1000 to 919885098850, and set a password for it.
Then restart FreeSWITCH: /etc/init.d/freeswitch restart
and try to register your phone with:
Username: 919885098850
Password: the password in your XML
Domain: your IP address
That's all; you can now register 10-digit, 11-digit, or any other length of extension.
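For reference, a minimal directory entry (e.g. conf/directory/default/919885098850.xml; the password and context values below are just placeholders) would look something like this:

<include>
  <user id="919885098850">
    <params>
      <param name="password" value="SuperSecret"/>
    </params>
    <variables>
      <variable name="user_context" value="default"/>
    </variables>
  </user>
</include>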
FreeSWITCH authenticates users/extensions based on the directory users defined in the directory folder, in files named like 1000.xml or XXXXXXX.xml.
If you want to create and authenticate these extensions dynamically, you have two options.
Either you write a service that reads data from your database and creates an XML file in this folder,
or, my personally preferred way, you use the mod_xml_curl module.
Trust me, xml_curl is the best thing. I invested many hours of R&D in exactly this question, learned about xml_curl the hard way, and it saved my day.
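As a rough sketch of how that fits together (the URL and values are placeholders, not a verified setup): you point mod_xml_curl's directory binding at your own web service in autoload_configs/xml_curl.conf.xml,

<configuration name="xml_curl.conf" description="cURL XML Gateway">
  <bindings>
    <binding name="directory">
      <param name="gateway-url" value="http://localhost/fs/directory.php" bindings="directory"/>
    </binding>
  </bindings>
</configuration>

and that service, after looking the user up in MySQL, answers each directory request with an XML document such as:

<document type="freeswitch/xml">
  <section name="directory">
    <domain name="your.domain.or.ip">
      <user id="919885098850">
        <params>
          <param name="password" value="SuperSecret"/>
        </params>
      </user>
    </domain>
  </section>
</document>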

Related

How does laravel-adldap2 determine a user match with local user table

Laravel 6. CentOS 7
I have an existing user table that was used with standard Laravel Auth.
I've converted to LDAP, but when a user logs in, the system tries to create a new user rather than syncing the existing entry unless there is an objectguid. This of course fails because of duplicate constraints.
I have another app where I'm doing the same thing, and it seems to work as expected. I can't figure out what's different in the configurations, other than that some column names differ, so the sync config looks a little different.
How does adldap2 determine a match in the local table before creating a new one?
Thanks

MIP SDK: fail to create FileHandler with error "Content protected by on prem servers is unsupported"

We are developing an application to open and edit protected PDF files using the MIP SDK (we're currently using version 1.6.103).
So far, we were able to open files protected with different versions of Microsoft protection, including MicrosoftIRMServices version 1.
We are now hitting a problem with one of our customers. They keep their files in a SharePoint 2016 directory, which is configured to automatically add protection to all uploaded files. Their whole environment is on-premises, and the AD RMS service is used for protection. They do not have Azure IP (Azure Information Protection) on the server side.
When we download the resulting file and try to open, we create a mipns::FileEngine and then invoke CreateFileHandlerAsync() to create a mipns::FileHandler. This call fails with the following mipns::NetworkError:
NetworkError : Content protected by on prem servers is unsupported., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://api.aadrm.com/my/v2/enduserlicenses,
As the error suggests, I suspect the issue is with the usage of an on-premise protection.
I thought it might be resolved following the instructions at
https://learn.microsoft.com/en-us/information-protection/develop/quick-app-adrms#configuring-protection-api-in-c-to-use-ad-rms
so, following those instructions, I created the FileEngine with
ProtectionEngine::Settings engineSettings("", authDelegate, "");
engineSettings.SetProtectionCloudEndpointBaseUrl("http://<my server>/_wmcs/licensing");
but so far no success, although the error has changed and is now
NetworkError : The protection service is unavailable., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://<my server>/my/v1/enduserlicenses,
(where of course <my server> is replaced with a local service)
Am I going in the wrong direction? If not, perhaps I am using the wrong endpoint? How can I find the endpoint URL to be passed to SetProtectionCloudEndpointBaseUrl as suggested in the linked page?
Thanks
This is likely caused by a missing MDE install or MDE SRV record. You'll need to validate that the Mobile Device Extension for AD RMS has been deployed and configured. If it has, you'll also need to validate that the SRV record is in place for any mail suffixes your customer is using. For example, if the RMS service is at RMS.FABRIKAM.COM but your customer email addresses are @contoso.com, you'd need an SRV record that looks like _rmsdisco._http._tcp.contoso.com, which would then point to the server at RMS.FABRIKAM.COM.
The base URL isn't used in consumption scenarios; it's only for publishing. That said, it looks like you've set the _wmcs endpoint, but we expect only the base URL for AD RMS:
ProtectionCloudEndpointBaseUrl = "https://rms.contoso.com"
That's only required when you don't provide a mip::Identity object when creating the file engine. If you do provide the identity, we'll use the domain suffix to look up the DNS record and chase that referral.
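As a rough C++ sketch of that combination (the identity and host are placeholders, and this just mirrors the settings calls already shown in the question rather than a verified recipe):

ProtectionEngine::Settings engineSettings(
    mip::Identity("user@contoso.com"),  // with an identity, the SDK uses the domain suffix for the SRV lookup
    authDelegate,
    "");                                // client data
// Only if no identity is provided: set the AD RMS base URL, without the /_wmcs/licensing suffix.
// engineSettings.SetProtectionCloudEndpointBaseUrl("https://rms.contoso.com");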

coturn cannot find credentials of user

I was trying to deploy a simple TURN server using coturn.
When I test it on Trickle ICE (turn:rtc.jackxujh.me:3478 [webrtc:mighty]), Trickle ICE says "Authentication failed?".
The coturn server keeps reporting this error:
ERROR: check_stun_auth: Cannot find credentials of user
Here is the complete turnserver.conf I am using (created by uncommenting lines of the sample coturn conf):
external-ip=39.108.74.114/XXX.XXX.XXX.XXX #(XXX is internal IP)
fingerprint
lt-cred-mech
use-auth-secret
static-auth-secret=XXXXXXXX... #(XXX is the secret)
realm=rtc.jackxujh.me
user=webrtc:0xXXXXXXXX... #(XXX is the key)
cert=/etc/letsencrypt/live/rtc.jackxujh.me/cert.pem
pkey=/etc/letsencrypt/live/rtc.jackxujh.me/privkey.pem
mobility
I found a related discussion on GitHub, but I don't think it reaches a solution in the end.
In fact, I am confused about whether my conf file is using the TURN REST API or not.
Meanwhile, I tried to check whether there was a user named webrtc in turndb by using # turnadmin -l, but the output was empty. (Is this command correct?)
In fact, I am confused about whether my conf file is using the TURN REST API or not.
I can confirm you are using the REST API, because use-auth-secret is set:
use-auth-secret
So you need to use a Unix timestamp as the username and a hashed password:
user=timestamp:userid
password=base64(hmac(secret_key, user))
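For example, a credential pair can be generated like this (a quick sketch with placeholder values; coturn's REST mechanism uses HMAC-SHA1 by default):

SECRET=XXXXXXXX                      # must match static-auth-secret in turnserver.conf
USERNAME="1700000000:webrtc"         # expiry timestamp, optionally followed by :userid
PASSWORD=$(echo -n "$USERNAME" | openssl dgst -binary -sha1 -hmac "$SECRET" | openssl base64)
echo "$USERNAME / $PASSWORD"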
Read more about the difference between the long-term credential mechanism and the REST API:
https://www.ietf.org/proceedings/87/slides/slides-87-behave-10.pdf
If you want to use a normal username/password, use the long-term credential mechanism instead: remove use-auth-secret
and set the user statically or in the db:
user=username1:key1
turnadmin
turnadmin -l
lists the static and db users.
So with the REST API, the empty list is correct.
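If you do switch to long-term credentials, a user can also be added to the database with turnadmin, along these lines (placeholder values):

turnadmin -a -u webrtc -r rtc.jackxujh.me -p mighty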

Image hosting with Dropbox in Ruby

Hi, I'm trying to use Dropbox as an image hosting server. The framework is currently Ruby.
I tried to use /media or /shares to get the data's URL. When I use the /media command, it seems the image can be hosted for a couple of hours, but in the end it expires (roughly 4 hours?). If possible, I want to know how to set the expiration date for this (to something like 1 year?), not manually but programmatically.
[How I extract "url" in Ruby]
image_1_link = Drop_client.media('path_name')["url"]
pdf_link = Drop_client.shares('path_name')["url"]
The below is a sample link after using "/media".
"https://dl.dropboxusercontent.com/1/view/e9lr642qerdmgm9/Apps/ringle_records/images/Test_1image_1?dl=1"
When I use "/share", the generated link looks like "https://db.tt/Zs0v4yaffal?dl=1". But it can't host the image if I use it as . If I put this type of address, it leads to dropbox page and ask manual downloading.
I want to know how to generate link for imaging hosting so that can work for 1 year or at least for a couple of months!
Thanks for reading my question!
/media links expire after 4 hours, as documented.
/shares takes a parameter, short_url, that can be set to false to return an unshortened URL. Unfortunately, I believe the v1 Ruby SDK doesn't support this parameter. You can either modify the Ruby SDK yourself or get the shortened URL and unshorten it yourself. (Make an HTTP GET request to the short URL and grab the Location header of the redirect response.)
Once you have an unshortened URL, you can modify it per www.dropbox.com/help/201 to get a link directly to the content. Specifically, you should use the raw=1 query parameter.
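Roughly, using the Drop_client from the question (an untested sketch):

require 'net/http'

short_url = Drop_client.shares('path_name')["url"]   # e.g. "https://db.tt/Zs0v4yaffal"
response  = Net::HTTP.get_response(URI(short_url))   # the short link replies with a redirect
long_url  = URI(response['location'])                # unshortened www.dropbox.com share URL
long_url.query = 'raw=1'                             # serve the file content directly
image_src = long_url.to_s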

How to add Tomcat virtual host instances programmatically

I have been working for the last 2 years on building a social network for companies using Grails.
A new requirement appeared: creating a separate virtual host for each company, each with its own database of users, timelines, etc. (I would like to avoid rewriting the whole service layer.)
So initially the application was running on http://www.my-social-network.com, for example.
Now, using an admin console that we will have to develop, companies should be able to create their own subdomain, like http://company1.my-social-network.com, and so on.
The web server we are using is Apache 2.2 + Tomcat 6.
Does someone have an idea of how to do this?
Ideally, I want one instance of the application that receives requests with different host names and behaves differently for each, in order to save resources, because Grails consumes a lot of memory.
For example :
subdomain1.my-social-network.com --> apache 2 --> my-social-network.com (+ specific headers) --> tomcat
If such a thing is possible, is there a way to select a datasource depending on a request parameter or header?
Any help is appreciated
There are a number of different options you can take, but first you need to make a decision on how you are going to implement this at the lowest level:
You can take the requests to subdomain1.my-social-network.com and redirect the user to my-social-network.com (e.g. with an HTTP 301 permanent redirect).
Same as above but use HTTP 302, HTTP 303 or HTTP 307 instead.
Simply show the contents of the site, responding with HTTP 200 (probably the best approach as these domains are meant to be permanent). Further text assumes this option.
Next, you need a servlet filter which intercepts all HTTP traffic and has a map {virtual_path -> real_site}. This filter can simply set the relevant request attribute (hint: servletRequest.setAttribute(String, Object)) when it detects that the requested virtual path is recognized (see the sketch below).
If a user creates/renames/deletes a domain/virtual path, you would update the map accordingly.
Finally, your render component should check that parameter and render relevant site. It is really hard to elaborate further without knowing more details on how your application works.
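A minimal sketch of such a filter, assuming the tenant is identified by the request's host name (the attribute name "tenant" and the map contents are illustrative, not part of the original answer):

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class VirtualHostFilter implements Filter {
    // host name -> real site, kept in sync by the admin console
    private final Map<String, String> hostToSite = new ConcurrentHashMap<String, String>();

    public void init(FilterConfig config) {
        // Populate from your database; entries are added/removed as companies manage their domains.
        hostToSite.put("company1.my-social-network.com", "company1");
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String host = ((HttpServletRequest) req).getServerName();
        String site = hostToSite.get(host);
        if (site != null) {
            // Later layers (e.g. datasource selection, rendering) read this attribute.
            req.setAttribute("tenant", site);
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}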
