MIP SDK: failing to create FileHandler with error "Content protected by on prem servers is unsupported"

We are developing an application to open and edit protected PDF files using the MIP SDK (we're currently using version 1.6.103).
So far, we have been able to open files protected with different versions of Microsoft protection, including MicrosoftIRMServices version 1.
We are now hitting a problem with one of our customers. They keep their files in a SharePoint 2016 directory, which is configured to automatically add protection to all uploaded files. Their entire environment is on-premises, and the AD RMS service is used for protection. They do not have Azure Information Protection on the server side.
When we download the resulting file and try to open it, we create a mipns::FileEngine and then invoke CreateFileHandlerAsync() to create a mipns::FileHandler. This call fails with the following mipns::NetworkError:
NetworkError : Content protected by on prem servers is unsupported., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://api.aadrm.com/my/v2/enduserlicenses,
As the error suggests, I suspect the issue is the use of on-premises protection.
I thought it might be resolved by following the instructions at
https://learn.microsoft.com/en-us/information-protection/develop/quick-app-adrms#configuring-protection-api-in-c-to-use-ad-rms
so, following those instructions, I created the FileEngine with
ProtectionEngine::Settings engineSettings("", authDelegate, "");
engineSettings.SetProtectionCloudEndpointBaseUrl("http://<my server>/_wmcs/licensing");
but so far no success, although the error has changed and is now
NetworkError : The protection service is unavailable., NetworkError.Category=FailureResponseCode, HttpRequest.SanitizedUrl=https://<my server>/my/v1/enduserlicenses,
(where of course <my server> is replaced with a local service)
Am I going in the wrong direction? If not, perhaps I am using the wrong endpoint? How can I find the endpoint URL to be passed to SetProtectionCloudEndpointBaseUrl as suggested in the linked page?
Thanks

This is likely caused by a missing MDE install or MDE SRV record. You'll need to validate that Mobile Device Extension for AD RMS has been deployed and configured. If it has, you'll also need to validate that the SRV record is in place for any mail suffixes your customer is using. For example, if the RMS service is at RMS.FABRIKAM.COM but your customer email addresses are @contoso.com, you'd need an SRV record that looks like _rmsdisco._http._tcp.contoso.com, which would then point to the server at RMS.FABRIKAM.COM.
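In zone-file syntax, such a record would look something like this (the TTL, priority, and weight here are illustrative; MDE discovery goes over HTTPS, hence port 443):
_rmsdisco._http._tcp.contoso.com. 3600 IN SRV 0 0 443 rms.fabrikam.com.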
The base URL isn't used in consumption scenarios; it's only for publishing. That said, it looks like you've set the _wmcs endpoint, but we expect only the base URL for AD RMS:
ProtectionCloudEndpointBaseUrl = "https://rms.contoso.com"
That's only required when you don't provide a mip::Identity object when creating the file engine. If you do provide the identity, we'll use the domain suffix to look up the DNS record and chase that referral.
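For example, a minimal sketch in C++ (constructor overloads vary between MIP SDK versions, so mirror your existing settings; the e-mail address and the promise plumbing are illustrative, following the SDK samples):

// Supplying a mip::Identity lets the SDK take the domain suffix
// ("contoso.com") and chase the _rmsdisco SRV record, so no explicit
// SetProtectionCloudEndpointBaseUrl call is needed for consumption.
mip::FileEngine::Settings engineSettings(
    mip::Identity("user@contoso.com"),  // hypothetical user in the AD RMS domain
    authDelegate,                       // your existing auth delegate
    "" /* clientData */);

auto enginePromise = std::make_shared<std::promise<std::shared_ptr<mip::FileEngine>>>();
auto engineFuture = enginePromise->get_future();
fileProfile->AddEngineAsync(engineSettings, enginePromise);
auto fileEngine = engineFuture.get();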


coturn cannot find credentials of user

I was trying to deploy a simple TURN server using coturn.
When I test it on Trickle ICE (turn:rtc.jackxujh.me:3478 [webrtc:mighty]), Trickle ICE says "Authentication failed?".
The coturn server keeps reporting this error:
ERROR: check_stun_auth: Cannot find credentials of user
Here is the complete turnserver.conf I am using (by uncommenting lines of the coturn sample conf):
external-ip=39.108.74.114/XXX.XXX.XXX.XXX #(XXX is internal IP)
fingerprint
lt-cred-mech
use-auth-secret
static-auth-secret=XXXXXXXX... #(XXX is the secret)
realm=rtc.jackxujh.me
user=webrtc:0xXXXXXXXX... #(XXX is the key)
cert=/etc/letsencrypt/live/rtc.jackxujh.me/cert.pem
pkey=/etc/letsencrypt/live/rtc.jackxujh.me/privkey.pem
mobility
I found a related discussion on GitHub, but I don't feel there is a solution at the end.
In fact, I am confused about whether my conf file is using the TURN REST API or not.
Meanwhile, I tried to check whether there is a user named webrtc in turndb by using # turnadmin -l, but the output was empty. (Is this command correct?)
"In fact, I am confused whether my conf file is using TURN REST API or not."
I can confirm that you are using the REST API, because use-auth-secret is set:
use-auth-secret
So you need to use a Unix timestamp as the username, and the HMAC-hashed secret as the password:
user = timestamp:userid
password = base64(hmac(secret_key, username))
Read more about the difference between long-term credentials and the REST API here:
https://www.ietf.org/proceedings/87/slides/slides-87-behave-10.pdf
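As a concrete illustration, here is a minimal sketch of deriving such ephemeral credentials (assuming OpenSSL; HMAC-SHA1 is the default hash in the TURN REST mechanism, and the secret and user id below are placeholders):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <ctime>
#include <iostream>
#include <string>

int main() {
    const std::string secret = "XXXXXXXX";  // static-auth-secret from turnserver.conf
    // The username is an expiry timestamp, optionally suffixed with ":userid".
    const std::string username = std::to_string(std::time(nullptr) + 24 * 3600) + ":webrtc";

    // password = base64(HMAC-SHA1(secret, username))
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digestLen = 0;
    HMAC(EVP_sha1(), secret.data(), static_cast<int>(secret.size()),
         reinterpret_cast<const unsigned char*>(username.data()), username.size(),
         digest, &digestLen);

    unsigned char b64[4 * ((EVP_MAX_MD_SIZE + 2) / 3) + 1];
    EVP_EncodeBlock(b64, digest, static_cast<int>(digestLen));

    std::cout << "username: " << username << "\n"
              << "password: " << b64 << std::endl;
}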
If you want to use a normal username/password, use the long-term credential mechanism instead: remove use-auth-secret and set the user statically or in the db:
user=username1:key1
As for turnadmin:
turnadmin -l
lists static and db users, so in the REST API case the empty list is correct.
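Putting that together, a trimmed turnserver.conf for the REST API case would look something like this (a sketch based on your conf: lt-cred-mech and the static user line are dropped, since use-auth-secret takes over authentication):

external-ip=39.108.74.114/XXX.XXX.XXX.XXX
fingerprint
use-auth-secret
static-auth-secret=XXXXXXXX...
realm=rtc.jackxujh.me
cert=/etc/letsencrypt/live/rtc.jackxujh.me/cert.pem
pkey=/etc/letsencrypt/live/rtc.jackxujh.me/privkey.pem
mobility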

aspnetboilerplate Shared cookie invalid with services.AddDataProtection()

I have the following scenario:
Server A: abpWeb;
Server B: abpWeb;
A and B are based on the MyCompanyName.AbpZero template (aspnetboilerplate, ABP .NET Core version 3.1.1).
The browser accesses A:abpWeb and B:abpWeb, but after logging in, the shared cookie is invalid:
A: User.Identity?.IsAuthenticated equals true after the browser accesses A:Login;
but after refreshing B:/index in the browser, B: User.Identity?.IsAuthenticated equals false.
The browser domain for A and B is the same.
When I created two new ASP.NET Core 2.0 MVC apps with ASP.NET Core Identity, sharing the cookie the normal way with AddDataProtection worked fine.
I referred to:
https://learn.microsoft.com/en-us/aspnet/core/security/cookie-sharing?tabs=aspnetcore2x
I have been searching the net for a long time, but to no avail. Please help, or give me some ideas on how to achieve this.
Thanks in advance.
The keys that encrypt/decrypt your cookies are probably being written to an invalid folder.
By default AddDataProtection tries to write these keys to:
%LOCALAPPDATA%\ASP.NET\DataProtection-Keys
Because an environment variable is used to build the key path, you will need to set the following config file setting to true, so that the worker process loads a user profile and %LOCALAPPDATA% resolves.
Please also see my other answer here:
IIS - AddDataProtection PersistKeysToFileSystem not creating
Fix: Within %WINDIR%\System32\inetsrv\config\applicationHost.config set setProfileEnvironment=true. I think you have to restart IIS as well.
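For reference, the relevant fragment of applicationHost.config looks roughly like this (placement per the IIS schema; you can also set it on a specific application pool instead of the defaults):

<system.applicationHost>
  <applicationPools>
    <applicationPoolDefaults>
      <processModel setProfileEnvironment="true" />
    </applicationPoolDefaults>
  </applicationPools>
</system.applicationHost>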

Is it possible to create External user profiles in IBM Connections using the ProfileAdminService?

I've been able to create new profiles in IBM Connections 5 using the ProfileAdminService but can't find any documentation on how to flag them as External.
The Social Business Toolkit doesn't expose the isExternal flag via the Profile object. I've tried to set it manually with
profile.setAsString("snx:isExternal","true");
or
profile.setAsString("isExternal","true");
but the created profile always ends up being a normal/internal one.
Is this possible yet via the API?
Thanks
I figured this out over the weekend.
You CAN add external users using the Connections ProfileAdminService, but you CAN'T do it yet using the Social Business Toolkit (the functionality isn't there yet).
To make it work, I created my own build of the SBT and added "userMode" to the ProfileAttributes. It caught me out initially, as I was looking for isExternal; I should have guessed it was userMode, since that's the name in the TDI assembly:
com.ibm.sbt.services.client.connections.profiles.utils.ProfilesConstants
public enum ProfileAttribute {
    GUID("guid", "com.ibm.snx_profiles.base.guid"),
    EMAIL("email", "com.ibm.snx_profiles.base.email"),
    UID("uid", "com.ibm.snx_profiles.base.uid"),
    DISTINGUISHED_NAME("distinguishedName", "com.ibm.snx_profiles.base.distinguishedName"),
    DISPLAY_NAME("displayName", "com.ibm.snx_profiles.base.displayName"),
    GIVEN_NAMES("givenNames", "com.ibm.snx_profiles.base.givenNames"),
    SURNAME("surname", "com.ibm.snx_profiles.base.surname"),
    USER_STATE("userState", "com.ibm.snx_profiles.base.userState"),
    USER_MODE("userMode", "com.ibm.snx_profiles.base.userMode") // <<< added this line
    ;
    // (constructor and accessors unchanged)
}

Download build drop from hosted Team Foundation Service

Using the hosted Team Foundation Service at tfs.visualstudio.com, one has the option in a Build Definition to "Copy build output to the server" which creates a zip of the drop folder that can be downloaded over https via team web access. I really need to download this drop automatically, so I can chain input to the next stage in my build pipeline.
Unfortunately, the drop URL is not obvious, but can be created using the TfsDropDownloader.
TL;DR - I can't get the TfsDropDownloader to work. I'm hoping someone else has used this tool or a similar method to successfully download a drop from https://tfs.visualstudio.com.
Using the command line TfsDropDownloader.exe I can do this:
TfsDropDownloader.exe /c:"https://MYPROJECTNAME.visualstudio.com/DefaultCollection" /t:"ProjectName" /b:"BuildDefinitionName" /u:username /p:password
...and get an empty zip file with the correct build label name of the last successful build e.g. BuildDefinitionName_20130611.1.zip
Running the source code in the debugger, I found this is because the URL that is generated for downloading:
https://tflonline.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
...returns a content type of application/json, which is unsupported. This exception is swallowed by the application, but not before the empty zip file is created.
Is it possible the REST API on Team Foundation Service has changed in some way so the generated URL is no longer correct?
Note that I am using the "alternate credentials" defined on my Team Foundation Service account (i.e. not my live ID) - using anything else gets me TF30063: not authorized.
I got it working by using alternate credentials, but I also had to access the REST API via a different path.
The current TfsDropDownloader builds a URL that looks like this:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop/BuildDefinitionName_20130611.1.zip
This returns empty JSON whenever I try to use it. I'm definitely authenticated, because if I tweak the URL to:
https://project.visualstudio.com/DefaultCollection/_apis/resources/containers/804/drop
I get a nice JSON listing of every single file in the drop, but no zip.
By spying on the SSL traffic to https://tfs.visualstudio.com with Fiddler while clicking the "Download drop as zip" link, I can see that there is another endpoint at:
https://project.visualstudio.com/DefaultCollection/ProjectName/_api/_build/ItemContent?buildUri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f639&path=%2Fdrop
...which does give you a zip. The "vstfs%3a%2f%2f%2fBuild%2fBuild%2f639" portion is the URL encoded BuildUri.
So I've changed my version of GetServerPath in the TfsDropDownloader source to do this:
private static string GetServerPath(TfsConnection collection, IBuildDetail buildDetail)
{
    // Build the same URL the web UI's "Download drop as zip" link uses:
    // {collection}{project}/_api/_build/ItemContent?buildUri={encoded build URI}&path=%2Fdrop
    var downloadPath = string.Format("{0}{1}/_api/_build/ItemContent?buildUri={2}&path=%2Fdrop",
        collection.Uri,
        HttpUtility.UrlPathEncode(buildDetail.TeamProject),
        HttpUtility.UrlEncode(buildDetail.Uri.ToString()));
    return downloadPath;
}
This works for me for the time being. Hopefully this helps someone else with the same problem!

How to get an entity edit URL from within a plug-in in MS Dynamics CRM 4.0

I would like to have a workflow create a task, then email the assigned user that they have a new task and include a link to the newly created task in the body of the email. I have client-side code that will correctly create the edit URL, using the entity's GUID, and store it in a custom attribute. However, when the task is created from within a workflow, the client script isn't run.
So, I think a plug-in should work, but I can't figure out how to determine the URL of the CRM installation. I'm authoring this in a test environment and definitely don't want to have to change things when I move to production. I'm sure I could use a config file, but it seems like the plug-in should be able to figure this out at runtime.
Does anyone have any ideas on how to access the URL of the CRM service from within a plug-in? Any other approaches?
There is no simple way to do this. However, there is one.
MSCRM_CONFIG is the deployment database that holds physical deployment properties, like the URL from which users are accessing the CRM deployment. The URL you want is the one stored in "ADWebApplicationRootDomain", in the MSCRM_CONFIG.dbo.DeploymentProperties table. You may need extra permissions to access this database.
Note that this doesn't work in a deployment that is an Internet Facing Deployment.
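For illustration, the lookup would be something like this (a sketch; verify the column names against your MSCRM_CONFIG schema before relying on it):

SELECT NVarCharColumn
FROM MSCRM_CONFIG.dbo.DeploymentProperties
WHERE ColumnName = 'ADWebApplicationRootDomain'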
Another way could be to query the discovery service to retrieve the same information (if you are on the Online edition of MSCRM 4).
What do you mean by "change things"?
If you create a custom workflow assembly, you can give it a server url input. Once you register it with CRM, you can simply type in the server url when you configure the workflow. You'll have to update the url for any workflows that use the custom workflow assembly once you move to production, but you'll only have to do that once.
My apologies if this is what you meant you wanted to avoid.
Edit: Sounds like you may be able to use the CustomConfiguration attribute when you register the plugin. Here's some more info.
http://blogs.msdn.com/crm/archive/2008/10/24/storing-configuration-data-for-microsoft-dynamics-crm-plug-ins.aspx
// Reads the CRM server URL from the registry on the CRM server, then strips
// the web-service path ("MSCRMServices") to get the web application root.
string url = ((string)Registry.LocalMachine
        .OpenSubKey("Software\\Microsoft\\MSCRM")
        .GetValue("ServerUrl"))
    .Replace("MSCRMServices", "");
