coturn cannot find credentials of user - turn

I was trying to deploy a simple TURN server using coturn.
When I test it with Trickle ICE (turn:rtc.jackxujh.me:3478, credentials webrtc:mighty), Trickle ICE reports "Authentication failed?".
The coturn server keeps reporting this error:
ERROR: check_stun_auth: Cannot find credentials of user
Here is the complete turnserver.conf I am using (built by uncommenting lines in the coturn sample conf):
external-ip=39.108.74.114/XXX.XXX.XXX.XXX #(XXX is internal IP)
fingerprint
lt-cred-mech
use-auth-secret
static-auth-secret=XXXXXXXX... #(XXX is the secret)
realm=rtc.jackxujh.me
user=webrtc:0xXXXXXXXX... #(XXX is the key)
cert=/etc/letsencrypt/live/rtc.jackxujh.me/cert.pem
pkey=/etc/letsencrypt/live/rtc.jackxujh.me/privkey.pem
mobility
I found a related discussion on GitHub, but it doesn't seem to arrive at a solution.
In fact, I am confused whether my conf file is using TURN REST API or not.
Meanwhile, I tried to check whether there is a user named webrtc in turndb by running turnadmin -l as root, but the output was empty. (Is this the correct command?)

In fact, I am confused whether my conf file is using TURN REST API or not.
I can confirm you are using the TURN REST API, because use-auth-secret is set:
use-auth-secret
So you need to use an expiry Unix timestamp in the username, and a password derived from it:
user=timestamp:userid
password=base64(hmac(secret key, user))
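For illustration, here is a minimal Ruby sketch of that derivation, assuming coturn's default HMAC-SHA1 and that the shared secret is your static-auth-secret value (the secret and userid below are placeholders):

require 'openssl'
require 'base64'

# TURN REST API credentials: the username is "expiry-timestamp:userid",
# the password is base64(HMAC-SHA1(static-auth-secret, username)).
secret   = 'XXXXXXXX'                 # your static-auth-secret
expiry   = Time.now.to_i + 24 * 3600  # credential valid for 24 hours
username = "#{expiry}:webrtc"
password = Base64.strict_encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA1'), secret, username)
)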
Read more about the difference between the long-term credential mechanism and the REST API:
https://www.ietf.org/proceedings/87/slides/slides-87-behave-10.pdf
If you want to use a normal username/password pair, use the long-term credential mechanism instead: remove use-auth-secret and set the user statically or in the database:
user=username1:key1
As for turnadmin:
turnadmin -l
lists static and database users, so with the REST API the empty list you saw is correct.
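If you switch to the long-term mechanism instead, you can add a user to the coturn database with turnadmin; per the coturn documentation the usual form is:
turnadmin -a -u <username> -r <realm> -p <password>
After that, turnadmin -l should list the user.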

Related

MIP SDK: fail to create FileHandler with error "Content protected by on prem servers is unsupported"

We are developing an application to open and edit protected PDF files using the MIP SDK (we're currently using version 1.6.103).
So far, we have been able to open files protected with different versions of Microsoft protection, including MicrosoftIRMServices version 1.
We are now hitting a problem with one of our customers. They keep their files in a SharePoint 2016 directory, which is configured to automatically add protection to all uploaded files. Their whole environment is on-premises, and the AD RMS service is used for protection. They do not have Azure Information Protection on the server side.
When we download the resulting file and try to open it, we create a mipns::FileEngine and then invoke CreateFileHandlerAsync() to create a mipns::FileHandler. This call fails with the following mipns::NetworkError:
NetworkError : Content protected by on prem servers is unsupported., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://api.aadrm.com/my/v2/enduserlicenses,
As the error suggests, I suspect the issue is with the usage of an on-premise protection.
I thought it might be resolved following the instructions at
https://learn.microsoft.com/en-us/information-protection/develop/quick-app-adrms#configuring-protection-api-in-c-to-use-ad-rms
so, following those instructions, I created the FileEngine with
ProtectionEngine::Settings engineSettings("", authDelegate, "");
engineSettings.SetProtectionCloudEndpointBaseUrl("http://<my server>/_wmcs/licensing");
but so far no success, although the error has changed and is now
NetworkError : The protection service is unavailable., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://<my server>/my/v1/enduserlicenses,
(where of course <my server> is replaced with a local service)
Am I going in the wrong direction? If not, perhaps I am using the wrong endpoint? How can I find the endpoint URL to be passed to SetProtectionCloudEndpointBaseUrl as suggested in the linked page?
Thanks
This is likely caused by a missing MDE install or MDE SRV record. You'll need to validate that the Mobile Device Extension for AD RMS has been deployed and configured. If it has, you'll also need to validate that the SRV record is in place for any mail suffixes your customer is using. For example, if the RMS service is at RMS.FABRIKAM.COM, but your customer email addresses are @contoso.com, you'd need an SRV record that looks like _rmsdisco._http._tcp.contoso.com, which would then point to the server at RMS.FABRIKAM.COM.
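For reference, such an SRV record in ordinary zone-file syntax would look roughly like this (names are the hypothetical ones from above; the Mobile Device Extension discovery service is expected over HTTPS on port 443):
_rmsdisco._http._tcp.contoso.com. 3600 IN SRV 0 0 443 rms.fabrikam.com.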
The base URL isn't used in consumption scenarios; it's only for publishing. That said, it looks like you've set the _wmcs endpoint, but we expect only the base URL for AD RMS:
ProtectionCloudEndpointBaseUrl = "https://rms.contoso.com"
That's only required when you don't provide a mip::Identity object when creating the file engine. If you do provide the identity, we'll use the domain suffix to look up the DNS record and chase that referral.

Dredd can't find my API documentation, how do I tell it where it is if it's not on my local drive (it's on the apiary.io server)

I am using the Dredd tool to test my API (which resides on apiary.io).
Question
I would like to provide Dredd with a path to my documentation (it even asks for it); however, my API doc is on apiary.io and I don't know the exact URL that points to it. What would be the correct way to provide Dredd with the API path?
What did work (but not what I'm looking for)
Note: I tried downloading the API description to my local drive and providing Dredd with a local path to the file (yml or apib), which works fine (yay!), but I would like to avoid keeping a local copy and simply provide Dredd with the location of my real API doc, which is maintained on the Apiary server.
How do I do this (without first fetching the file to local drive)?
Attempts to solve this that failed
I also read about (and tried) the following topics; they may be relevant, but I wasn't successful in resolving the issue:
- Using an authentication token as an environment variable
- Providing the domain provided by apiary.io//settings to dredd
- Providing the in the dredd command
All of these attempts produce the same result: Dredd has no idea where to find the API document unless I provide a path to the file on my local computer (which I have to download or create manually first).
Any help is appreciated, Thanks!
If I understand correctly, you would like to use Dredd and feed it with the API description document residing on the Apiary.io platform, right?
If so, you should be able to do that simply calling the init command with the right options:
dredd init -r apiary -j apiaryApiKey:privateToken -j apiaryApiName:sasdasdasd
You can find the private token going into the Test section of the target API (you'll find the button on the application header).
Let me know if this solves the problem for you. I'll make sure to propagate this and document it accordingly on our help page.
P.S: You can also use your own reporter - in that case, simply omit -r apiary when writing the command line parameters.
You can feed Dredd not only with a path to a file on your disk, but also with a URL.
If your API in Apiary is public, the API description document (in this case API Blueprint) should have a public URL. For example, if you go to http://docs.apiblueprintapi.apiary.io/, you can see on the left there is a Download link. Unfortunately, the link is visible only for users who do not have access to the editor of the API, so you can’t see the link if you’re the owner of the API. Try logging out from Apiary and the link should appear.
Then you can feed Dredd with the link:
$ dredd 'http://docs.apiblueprintapi.apiary.io/api-description-document' 'http://example.com:8080/api'
I agree this isn’t very intuitive, and since you’re not the first one to come up with this, I think we’ll look into ways to make it easier.
If your API isn't public then unfortunately there's no way to get the URL as of now. However, you can either use GitHub Sync or Apiary CLI to get the file on your disk in an automated manner.

How can I authorize a Google Service Account without the default credentials file?

I have a Google Service Account that my app uses to retrieve data from Google Analytics.
When I created the account I downloaded a client_secrets file with all the necessary information for authorization via OAuth, and I recorded the path to this file in an environment variable called GOOGLE_APPLICATION_CREDENTIALS as per Google's documentation.
I can now get an authenticated client like this:
authorization = Google::Auth.get_application_default(scopes)
This method reads the credentials out of the file, which works locally, but my app is hosted on Heroku where file storage is impossible.
The documentation states that I can either provide this file (can’t), run my app on an official Google Service (won’t), or experience an error.
How can I authenticate my service account without the client_secrets file?
I found the answer in the source code of the google-auth-library-ruby gem.
It turns out that there is another option: take the values from the client_secrets file and put them in environment variables named GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_ID, GOOGLE_CLIENT_EMAIL and GOOGLE_PRIVATE_KEY respectively.
If these keys are populated, the credentials will load from there. Not a whisper of this in the docs, though.
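To make this concrete, here is a minimal sketch, assuming those variables are set in the environment (e.g. via heroku config:set); the scope is just an example value:

require 'googleauth'

# With GOOGLE_ACCOUNT_TYPE, GOOGLE_CLIENT_ID, GOOGLE_CLIENT_EMAIL and
# GOOGLE_PRIVATE_KEY set, no credentials file is needed on disk.
scopes = ['https://www.googleapis.com/auth/analytics.readonly']
authorization = Google::Auth.get_application_default(scopes)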
Since this is one of the main results returned when searching Google for "google service credentials ruby," I thought I would add my very recent experience to the list of possible answers.
Though you can use the method mentioned in the first answer, I found an alternate solution that works well with Heroku. I know it has been somewhat mentioned in another post, but the key thing that was left out was how to properly store the full GOOGLE_APPLICATION_CREDENTIALS .json file so that it can all be kept within one env var on Heroku without special characters blowing up your app when trying to set it.
I detail my steps below:
Obtain your GOOGLE_APPLICATION_CREDENTIALS json file by following Google's instructions here: Getting Started with Authentication
That will, of course, contain a JSON object with all the spaces, line returns, and quotations that Heroku simply doesn't need. So, strip out all spaces and line breaks...AND PAY ATTENTION HERE -> EXCEPT FOR THE LINE BREAKS WITHIN THE 'BEGIN PRIVATE KEY' SEGMENT. Basically, turn the JSON into one long string. Use whatever method you feel comfortable with.
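If you'd rather not strip the whitespace by hand, one way (a Ruby sketch; the file names are hypothetical) is to round-trip the file through a JSON parser. The \n escapes inside the private key are part of the JSON string itself, so they survive minification:

require 'json'

# Re-serialize the credentials file as one line of JSON with no
# insignificant whitespace; the "\n" escapes in the private key remain.
minified = JSON.generate(JSON.parse(File.read('credentials.json')))
File.write('credentials.min.json', minified)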
Once you have a single line json file with whitespace and line breaks removed, you will need to add it to Heroku by running the following command:
heroku config:set GOOGLE_APPLICATION_CREDENTIALS="$(< /Users/whoever/Downloads/[CREDENTIAL_JSON_FILENAME].json)" --app your-app
For my situation, I needed to have the service account available on initialization, so I placed this in an initializer in my Rails app:
GOOGLE_SERVICE_ACCOUNT_CREDENTIALS = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: StringIO.new(ENV['GOOGLE_APPLICATION_CREDENTIALS'])
)
Notice the StringIO.new() call: #make_creds wants an IO object, so fake one by wrapping the string in StringIO.new.
This method works perfectly.
If you need this to work differently on your local machine, you can always store the .json somewhere in the project and reference it through a file location string. Here is my full initializer:
require 'googleauth'
require 'stringio'
# https://www.rubydoc.info/github/google/google-auth-library-ruby/Google/Auth/ServiceAccountCredentials

if Rails.env == "test"
  GOOGLE_SERVICE_ACCOUNT_CREDENTIALS = Google::Auth::ServiceAccountCredentials.make_creds(
    json_key_io: File.open('lib/google/google_application_credentials.json')
  )
elsif Rails.env != "development"
  GOOGLE_SERVICE_ACCOUNT_CREDENTIALS = Google::Auth::ServiceAccountCredentials.make_creds(
    json_key_io: StringIO.new(ENV['GOOGLE_APPLICATION_CREDENTIALS'])
  )
end
If you are using a gem like dotenv, you can store the formatted JSON string in an ENV variable, or you can just reference the file location in the ENV.
I hope this helps someone.
I found this:
require "google/cloud/bigquery"
ENV["BIGQUERY_PROJECT"] = "my-project-id"
ENV["BIGQUERY_CREDENTIALS"] = "path/to/keyfile.json"
bigquery = Google::Cloud::Bigquery.new
More detail:
https://github.com/googleapis/google-cloud-ruby/blob/master/google-cloud-bigquery/AUTHENTICATION.md

Can I declare credentials only once for a REST API?

I am using Power Query within Power BI Designer to query a REST API. The first request is to:
http://domain/httpAuth/app/rest/server
which returns:
<server>
<builds href="/httpAuth/app/rest/builds"/>
</server>
From there I use Power Query to query http://domain/httpAuth/app/rest/builds in order to get a list of builds and then iterate over the list of builds, calling each one in turn. The format of the URL for each build is:
http://domain/httpAuth/app/rest/builds/id:buildId
The problem is I'm getting prompted to enter credentials for every single request. This is tedious and unworkable (we have a lot of builds).
Is there a way to define the credentials once for (say) stub http://domain/httpAuth/app/rest and have every resource under that stub use the same credentials?
At the moment there is no direct way to do this for HTTP sources. A workaround for now is to connect to the root source first (http://domain/httpAuth/app/rest/builds or just http://domain/) and set the credentials there.
If you trust all of the data sources you are connecting to, you can also disable the firewall by going to the Workbook Settings dialog and selecting the Ignore option for Fast Combine.
EDIT: Sorry, I misread the question. In the case of credentials, connect to the root source first and set the credentials there. Those credentials should then be used for the remaining URLs.
I believe you can set an Authorization header and send it with your request.
(Apologies for the Wiki link: http://en.wikipedia.org/wiki/Basic_access_authentication)
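For what it's worth, the header value itself is easy to construct. Here is a small Ruby sketch of Basic access authentication (the credentials are made up):

require 'base64'

# Basic auth: "Basic " followed by Base64("username:password").
def basic_auth_header(user, password)
  'Basic ' + Base64.strict_encode64("#{user}:#{password}")
end

basic_auth_header('alice', 's3cret')
# => "Basic YWxpY2U6czNjcmV0"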

Get the complete username of a new user

When using NetUserAdd, the user will be created on the local computer or as a domain account, depending on the role of the server where you call this function.
I want to retrieve the complete username (LOCALCOMPUTER\USERNAME or DOMAIN\USERNAME) to use it remotely.
Is there a function to do this?
Caveat: I haven't checked the solution.
You may call NetGetJoinInformation to find out whether the machine belongs to a domain, and NetServerGetInfo to check whether the code is running on a DC.
After those tests you can get the machine name (GetComputerName) and the domain name (NetWkstaGetInfo) and whatever else you need.
Be careful if you are doing this on a cluster.
I'm sure I'm missing something, but can't you use GetUserNameEx and pass in the desired EXTENDED_NAME_FORMAT? I believe NameSamCompatible should be the format you desire.
You'll get back either MachineName\UserName or DomainName\UserName.
