CORS error with UnityWebRequest from ASP.NET Web API 6.0 with CORS enabled - asp.net-web-api

I'm working on a WebGL game with Unity in which I am attempting to call a custom ASP.NET Web API built specifically to work with the game. I've added the code I thought was necessary to get CORS working for the API requests made from the browser, but I keep getting the same error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource."
I've tried several different variations of how to implement CORS with the API in Program.cs. I tried:
builder.Services.AddCors(p => p.AddPolicy("corspolicy", build =>
{
    build.WithOrigins("*").AllowAnyMethod().AllowAnyHeader();
}));
And
builder.Services.AddCors(o =>
{
    o.AddPolicy("corspolicy", build =>
        build.WithOrigins("*")
            .AllowAnyMethod()
            .AllowAnyHeader());
});
Of course, I also included:
app.UseHttpsRedirection();
app.UseCors();
app.UseAuthorization();
app.MapControllers();
app.Run();
According to the documentation and the tutorials I followed, it looks like I added the correct code in the correct places, but when I publish the API to Azure and try to call it from the WebGL game, I always get that same error that I listed above.
I also tried using the AddDefaultPolicy() with just UseCors(), but that had the same result.
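For reference, here is a minimal sketch of the named-policy setup described above, assuming the .NET 6 minimal hosting model. One detail worth highlighting: a named policy only takes effect if its name is passed to UseCors (or applied with [EnableCors("corspolicy")] on the controller); a bare UseCors() only applies a default policy.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Register the named policy, mirroring the setup from the question.
builder.Services.AddCors(o => o.AddPolicy("corspolicy", policy =>
    policy.WithOrigins("*").AllowAnyMethod().AllowAnyHeader()));

var app = builder.Build();

app.UseHttpsRedirection();
app.UseCors("corspolicy"); // the policy name must be supplied here for a named policy
app.UseAuthorization();
app.MapControllers();
app.Run();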
I've tried plugging the URL into Postman and HTTPie along with the header "Origin": "http://127.0.0.1:5500" to mimic the game running on my local machine. When I include that, the response does contain an "Access-Control-Allow-Origin" header set to either "*" or "http://127.0.0.1:5500" (I've tried both), depending on what I used in WithOrigins().
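The same check can be scripted; here is a small sketch using .NET's HttpClient as a .NET 6 console snippet (the API URL is the placeholder from the question). It just sends an Origin header and prints whatever Access-Control-Allow-Origin value comes back:
using var http = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://apiurl...");

// Mimic the browser by sending the page's origin.
request.Headers.Add("Origin", "http://127.0.0.1:5500");

var response = await http.SendAsync(request);

// If CORS is configured, this header should come back as "*" or the echoed origin.
Console.WriteLine(response.Headers.TryGetValues("Access-Control-Allow-Origin", out var values)
    ? string.Join(", ", values)
    : "(no Access-Control-Allow-Origin header)");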
In Unity I'm making the API call like this:
public void MakeAPIRequest()
{
    _webRequest = CreateRequest($"https://apiurl...");
    _APICallType = APICallType.login;
    _webRequestAsyncOperation = _webRequest.SendWebRequest();
    _webRequestAsyncOperation.completed += GetRequestAsyncOperation_completed;
}

private UnityWebRequest CreateRequest(string path, RequestType type = RequestType.GET, object data = null)
{
    var request = new UnityWebRequest(path, type.ToString());
    if (data != null)
    {
        // Serialize the payload and attach it as the raw request body.
        var bodyRaw = Encoding.UTF8.GetBytes(JsonUtility.ToJson(data));
        request.uploadHandler = new UploadHandlerRaw(bodyRaw);
    }
    request.downloadHandler = new DownloadHandlerBuffer();
    request.SetRequestHeader("Content-Type", "application/json");
    return request;
}
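For completeness, a hedged sketch of how a POST with a JSON body would flow through the same helper; the RequestType.POST value and the LoginData payload type are hypothetical names assumed for illustration:
// Hypothetical payload type; JsonUtility requires it to be [Serializable].
[System.Serializable]
public class LoginData
{
    public string username;
    public string password;
}

public void MakeLoginRequest(string username, string password)
{
    var payload = new LoginData { username = username, password = password };

    // Reuses CreateRequest from above; the serialized body ends up in the UploadHandlerRaw.
    _webRequest = CreateRequest($"https://apiurl...", RequestType.POST, payload);
    _webRequestAsyncOperation = _webRequest.SendWebRequest();
    _webRequestAsyncOperation.completed += GetRequestAsyncOperation_completed;
}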
I'm currently at a loss for how to get this working, so if anyone here has any advice about what to try or look into, I would really appreciate it.

Related

Authenticating a Xamarin Android app using Azure Active Directory fails with 401 Unauthorized

I am trying to authenticate a Xamarin Android app using Azure Active Directory by following the article here:
https://blog.xamarin.com/authenticate-xamarin-mobile-apps-using-azure-active-directory/
I have registered a native application with AAD; note that I haven't given it any additional permissions beyond creating it.
Then I use the code below to authenticate the app with AAD:
button.Click += async (sender, args) =>
{
    var authContext = new AuthenticationContext(commonAuthority);
    if (authContext.TokenCache.Count > 0)
        authContext = new AuthenticationContext(authContext.TokenCache.ReadItems().First().Authority);

    authResult = await authContext.AcquireTokenAsync(graphResourceUri, clientId, returnUri, new PlatformParameters(this));
    SetContentView(Resource.Layout.Main);
    doGET("https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/OPSLABRG/providers/Microsoft.Compute/virtualMachines/LABVM?api-version=2015-08-01", authResult.AccessToken);
};
private string doGET(string URI, String token)
{
    Uri uri = new Uri(String.Format(URI));

    // Create the request
    var httpWebRequest = (HttpWebRequest)WebRequest.Create(uri);
    httpWebRequest.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + token);
    httpWebRequest.ContentType = "application/json";
    httpWebRequest.Method = "GET";

    // Get the response
    HttpWebResponse httpResponse = null;
    try
    {
        httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
    }
    catch (Exception ex)
    {
        Toast.MakeText(this, "Error from : " + uri + ": " + ex.Message, ToastLength.Long).Show();
        return null;
    }

    // Read and return the response body
    using (var reader = new StreamReader(httpResponse.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
This seems to be getting a token when using a Work account. Using a valid Hotmail account throws the error "A Bad Request was received".
However, the main problem is when I try to retrieve the VM details using REST: the GET fails with a 401 Unauthorized error even when using the Work account.
I am not sure if the code is lacking something or if I need to give the app some additional permissions. This needs to be able to support authenticating users from other tenants to get VM details.
Any guidance is appreciated.
"note that I haven't given it any additional permissions beyond creating it."
This is the problem here.
In order for you to call the Azure Management API https://management.azure.com/, you must first register your application to have permissions to call this API.
You can do that as part of your app registration, by granting the app delegated permission to the Azure Service Management API. Only at that point will your app be authorized to call ARM, and your calls should start to work.
According to your description, I checked this issue on my side. As Shawn Tabrizi mentioned, you need to assign the delegated permission for accessing the ARM REST API. Here is my code snippet; you could refer to it:
var context = new AuthenticationContext($"https://login.windows.net/{tenantId}");
result = await context.AcquireTokenAsync(
    "https://management.azure.com/",
    clientId,
    new Uri("{redirectUrl}"),
    platformParameter);
I would recommend using Fiddler or Postman to simulate the request against ARM with the access_token to narrow down this issue. If there are any errors, you can check the detailed response to troubleshoot the cause.
In my test I was able to retrieve the basic information of my Azure VM this way.
Additionally, you could use jwt.io to decode your access_token and check the related claims (e.g. aud, iss) to narrow down this issue.
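If you would rather inspect the token in code than paste it into jwt.io, here is a minimal sketch (assuming System and System.Text usings) that only decodes the payload segment of the JWT, without validating the signature, so you can check claims such as aud and iss:
static void PrintJwtPayload(string accessToken)
{
    // A JWT is "header.payload.signature"; the claims (aud, iss, ...) live in the second segment.
    var payload = accessToken.Split('.')[1];

    // Convert Base64Url to Base64 and fix up the padding before decoding.
    payload = payload.Replace('-', '+').Replace('_', '/');
    switch (payload.Length % 4)
    {
        case 2: payload += "=="; break;
        case 3: payload += "=";  break;
    }

    var json = Encoding.UTF8.GetString(Convert.FromBase64String(payload));
    Console.WriteLine(json); // expect "aud" to be https://management.azure.com/ for ARM calls
}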

Get "API key is missing" error when querying account details to Mailchimp API 3.0 using RestSharp

When using RestSharp to query account details in my MailChimp account, I get a "401: Unauthorized" with "API key is missing", even though it clearly isn't missing!
We use the same method to create our RestClient for several different requests, and they all work flawlessly. However, when we try to request the account details, meaning the RestRequest URI is empty, we get this weird error and message.
Examples:
private static RestClient CreateApi3Client(string apiKey)
{
    var client = new RestClient("https://us2.api.mailchimp.com/3.0");
    client.Authenticator = new HttpBasicAuthenticator(null, apiKey);
    return client;
}
public void TestCases()
{
    var client = CreateApi3Client(_account.MailChimpApiKey);

    var req1 = new RestRequest($"lists/{_account.MailChimpList}/webhooks", Method.GET);
    var res1 = client.Execute(req1); // works perfectly

    var req2 = new RestRequest($"automations/{_account.MailChimpTriggerEmail}/emails", Method.GET);
    var res2 = client.Execute(req2); // no problem

    var req3 = new RestRequest(Method.GET);
    var res3 = client.Execute(req3); // will give 401, api key missing

    var req4 = new RestRequest(string.Empty, Method.GET);
    var res4 = client.Execute(req4); // same here, 401
}
When trying the API call in Postman all is well: a GET to https://us2.api.mailchimp.com/3.0 with basic auth gives me all the account information, and when debugging in C# everything looks identical.
I'm trying to decide whether to blame a bug in RestSharp or in the MailChimp API. Has anyone had a similar problem?
After several hours we finally found what was causing this.
When RestSharp makes the request to https://us2.api.mailchimp.com/3.0/, it opts to omit the trailing '/' (even if you specifically add it in the RestRequest, like new RestRequest("/", Method.GET)), so the request was made to https://us2.api.mailchimp.com/3.0.
This caused a server-side redirect to 'https://us2.api.mailchimp.com/3.0/' (with the trailing '/'), and for some reason this redirect scrubbed away the authentication header.
So we tried making a
new RestRequest("/", Method.GET)
with some parameters (req.AddParameter("fields", "email")) to keep the trailing '/' from being dropped, but this too was failing.
The only way we were able to "fool" RestSharp was to write it a bit less elegantly:
new RestRequest("/?fields=email", Method.GET)

Auto-updates to Electron

I'm looking to deploy an auto-update feature to an Electron installation that I have; however, I am finding it difficult to find any resources on the web.
I've built a self-contained application using Adobe AIR before, and it seemed to be a lot easier writing update code that effectively checked a URL and automatically downloaded and installed the update across Windows and Mac OS X.
I am currently using the electron-boilerplate for ease of build.
I have a few questions:
How do I debug the auto-update feature? Do I set up a local connection and test through that using a local Node server, or can I use any web server?
In terms of signing the application, I am only looking to run apps on Mac OS X and particularly Windows. Do I have to sign the applications in order to run auto-updates? (I managed to do this with Adobe AIR using a local certificate.)
Are there any good resources that detail how to implement the auto-update feature? I'm having difficulty finding good documentation on how to do this.
I am also new to Electron, but I think there is no simple auto-update in electron-boilerplate (which I also use). Electron's auto-updater uses the Squirrel.Windows installer, which you also need to integrate into your solution in order to use it.
I am currently trying to use this:
https://www.npmjs.com/package/electron-installer-squirrel-windows
And more info can be found here:
https://github.com/atom/electron/blob/master/docs/api/auto-updater.md
https://github.com/squirrel/squirrel.windows
EDIT: I just opened the project to try it for a while and it looks like it works. It's pretty straightforward. These are pieces from my gulpfile.
In the current configuration, I use electron-packager to create a package.
var packager = require('electron-packager');
var createPackage = function () {
    var deferred = Q.defer();
    packager({
        // OPTIONS
    }, function done(err, appPath) {
        if (err) {
            gulpUtil.log(err);
        }
        deferred.resolve();
    });
    return deferred.promise;
};
Then I create an installer with electron-installer-squirrel-windows.
var squirrelBuilder = require('electron-installer-squirrel-windows');
var createInstaller = function () {
    var deferred = Q.defer();
    squirrelBuilder({
        // OPTIONS
    }, function (err) {
        if (err)
            gulpUtil.log(err);
        deferred.resolve();
    });
    return deferred.promise;
};
Also, you need to add some Squirrel handling code to your Electron background/main code. I used the electron-squirrel-startup template.
if(require('electron-squirrel-startup')) return;
The whole thing is described in the electron-installer-squirrel-windows npm documentation mentioned above; that bit of documentation is enough to get it started.
Now I am working on Electron branding through Squirrel and on creating appropriate gulp scripts for automation.
You could also use Electron's standard autoUpdater module on OS X and my simple port of it for Windows: https://www.npmjs.com/package/electron-windows-updater
I followed this tutorial and got it working with my Electron app, although it needs to be signed to work, so you would need:
certificateFile: './path/to/cert.pfx'
In the task config.
and:
"build": {
"win": {
"certificateFile": "./path/to/cert.pfx",
"certificatePassword": "password"
}
},
In the package.json
Are there any good resources that detail how to implement the auto-update feature? As I'm having difficulty finding some good documentation on how to do this.
You don't have to implement it yourself. You can use Electron's provided autoUpdater module and just set a feed URL. You need a server that provides the update information in a format compliant with the Squirrel protocol.
There are a couple of self-hosted ones (https://electronjs.org/docs/tutorial/updates#deploying-an-update-server) or a hosted service like https://www.update.rocks
Question 1:
I use Postman to validate that my auto-update server URLs return the response I am expecting. When I know that the URLs provide the expected results, I know I can use those URLs within the Electron autoUpdater of my application.
Example of testing Mac endpoint with Postman:
Request:
https://my-server.com/api/macupdates/checkforupdate.php?appversion=1.0.5&cpuarchitecture=x64
JSON Response when there is an update available:
{
    "url": "https://my-server.com/updates/darwin/x64/my-electron-app-x64-1.1.0.zip",
    "name": "1.1.0",
    "pub_date": "2021-07-03T15:17:12+00:00"
}
Question 2:
Yes, your Electron app must be code signed to use the auto-update feature on Mac. On Windows I'm not sure, because my Windows Electron app is code signed and I did not try without it. It is recommended that you sign your app even if auto-update could work without it, not only for security reasons but mainly because otherwise your users will get scary danger warnings from Windows when they install your app for the first time, and they might just delete it right away.
Question 3:
For good documentation, you should start with the official Electron Auto Updater documentation; as of 2021-07-07 it is really good.
The hard part is figuring out how to make things work for Mac. For Windows it's a matter of minutes and you are done. In fact...
For Windows auto-update, it is easy to set up: you just have to put the RELEASES and nupkg files on a server and then use that URL as the feed URL within your Electron app's autoUpdater. So if your app's update files are located at https://my-server.com/updates/win32/x64/, you would point the Electron autoUpdater to that URL, and that's it.
For Mac auto-update, you need to manually specify the absolute URL of the latest Electron app .zip file to the Electron autoUpdater. So, in order to make the Mac autoUpdater work, you need a way to get a JSON response in a very specific format. Sadly, you can't just put your Electron app's files on your server and expect it to work on Mac just like that; the autoUpdater needs a feed URL that returns that JSON response.
The way you achieve this can be anything, but I use PHP just because that's the server I already paid for.
So in summary, with Mac, even if your files are located at https://my-server.com/updates/darwin/x64/, you will not provide that URL as the Electron autoUpdater feed URL. Instead you will provide another URL, one that returns the expected JSON response.
Here's an example of my main.js file for the Electron main process of my App:
// main.js (Electron main process)
function registerAutoUpdater() {
    const appVersion = app.getVersion();
    const os = require('os');
    const cpuArchitecture = os.arch();
    const domain = 'https://my-server.com';
    const windowsURL = `${domain}/updates/win32/x64`;
    const macURL = `${domain}/api/macupdates/checkforupdate.php?appversion=${appVersion}&cpuarchitecture=${cpuArchitecture}`;

    // init the autoUpdater with the proper update feed URL
    const autoUpdateURL = isMac ? macURL : windowsURL;
    autoUpdater.setFeedURL({url: autoUpdateURL});
    log.info('Registered autoUpdateURL = ' + (isMac ? 'macURL' : 'windowsURL'));

    // initial checkForUpdates
    autoUpdater.checkForUpdates();

    // automatic 2-hour interval checkForUpdates loop
    setInterval(() => {
        autoUpdater.checkForUpdates();
    }, 7200000);
}
And here's an example of the checkforupdate.php file that returns the expected JSON response back to the Electron Auto Updater:
<?php
// FD Electron App Mac auto update API endpoint.
// The way Squirrel.Mac works is by checking a given API endpoint to see if there is a new version.
// If there is no new version, the endpoint should return HTTP 204. If there is a new version,
// however, it will expect an HTTP 200 JSON-formatted response, containing a url to a .zip file:
// https://github.com/Squirrel/Squirrel.Mac#server-support

$clientAppVersion = $_GET["appversion"] ?? null;
if (!isValidVersionString($clientAppVersion)) {
    http_response_code(204);
    exit();
}

$clientCpuArchitecture = $_GET["cpuarchitecture"] ?? null;
$latestVersionInfo = getLatestVersionInfo($clientAppVersion, $clientCpuArchitecture);
if (!isset($latestVersionInfo["versionNumber"])) {
    http_response_code(204);
    exit();
}

// Real logic starts here when the basics did not fail
$isUpdateAvailable = isUpdateAvailable($clientAppVersion, $latestVersionInfo["versionNumber"]);
if ($isUpdateAvailable) {
    http_response_code(200);
    header('Content-Type: application/json;charset=utf-8');
    $jsonResponse = array(
        "url" => $latestVersionInfo["directZipFileURL"],
        "name" => $latestVersionInfo["versionNumber"],
        "pub_date" => date('c', $latestVersionInfo["createdAtUnixTimeStamp"]),
    );
    echo json_encode($jsonResponse);
} else {
    // No update: must respond with a status code of 204 No Content.
    http_response_code(204);
}
exit();

// End of execution.
// Everything below here are function declarations.

function getLatestVersionInfo($clientAppVersion, $clientCpuArchitecture): array {
    // Override the path if the client requests an arm64 build
    if ($clientCpuArchitecture === 'arm64') {
        $directory = "../../updates/darwin/arm64/";
        $baseUrl = "https://my-server.com/updates/darwin/arm64/";
    } else if (!$clientCpuArchitecture || $clientCpuArchitecture === 'x64') {
        $directory = "../../updates/darwin/";
        $baseUrl = "https://my-server.com/updates/darwin/";
    }
    // Default name with version 0.0.0 avoids failing
    $latestVersionFileName = "Finance D - Tenue de livres-darwin-x64-0.0.0.zip";
    $arrayOfFiles = scandir($directory);
    foreach ($arrayOfFiles as $file) {
        if (is_file($directory . $file)) {
            $serverFileVersion = getVersionNumberFromFileName($file);
            if (isVersionNumberGreater($serverFileVersion, $clientAppVersion)) {
                $latestVersionFileName = $file;
            }
        }
    }
    return array(
        "versionNumber" => getVersionNumberFromFileName($latestVersionFileName),
        "directZipFileURL" => $baseUrl . rawurlencode($latestVersionFileName),
        "createdAtUnixTimeStamp" => filemtime(realpath($directory . $latestVersionFileName))
    );
}

function isUpdateAvailable($clientVersion, $serverVersion): bool {
    return
        isValidVersionString($clientVersion) &&
        isValidVersionString($serverVersion) &&
        isVersionNumberGreater($serverVersion, $clientVersion);
}

function getVersionNumberFromFileName($fileName) {
    // Extract the version number with a regex replacement
    return preg_replace("/Finance D - Tenue de livres-darwin-(x64|arm64)-|\.zip/", "", $fileName);
}

function removeAllNonDigits($semanticVersionString) {
    // Use a regex replacement to keep only numeric values in the semantic version string
    return preg_replace("/\D+/", "", $semanticVersionString);
}

function isVersionNumberGreater($serverFileVersion, $clientFileVersion): bool {
    // Receives two semantic versions (1.0.4) and compares their numeric value (104):
    // true when the server version is greater than the client version (105 > 104)
    return removeAllNonDigits($serverFileVersion) > removeAllNonDigits($clientFileVersion);
}

function isValidVersionString($versionString) {
    // True when it matches semantic version numbering: 0.0.0
    return preg_match("/\d\.\d\.\d/", $versionString);
}

WebApi Odata Windows Store App EndSaveChanges exception

I am trying to create a Windows Store app using a WebApi OData controller. After some effort I have all the GET requests working; I am now moving on to the CRUD methods and am getting the following exception on EndSaveChanges of the DataServiceContext.
<?xml version="1.0" encoding="utf-8"?>
<m:error xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<m:code />
<m:message xml:lang="en-US">No HTTP resource was found that matches the request URI 'http://localhost:56317/odata/ESFClients(guid'f04ad636-f896-4de4-816c-388106cd39ce')'.</m:message>
<m:innererror>
<m:message>No routing convention was found to select an action for the OData path with template '~/entityset/key'.</m:message>
<m:type></m:type>
<m:stacktrace></m:stacktrace>
</m:innererror>
</m:error>
Now, I think this is a bug in WebApi (see http://aspnetwebstack.codeplex.com/workitem/822) and it's hiding the actual error. To make sure it wasn't my OData endpoint, I created a quick console app to get an entry, update it and PATCH it back, which all worked OK. My WebApi OData controller derives from ODataController with
public HttpResponseMessage Patch([FromODataUri] Guid key, Delta<ESFClient> patch)
As the method.
In my Windows application I have an extension method on the DataServiceContext for saving changes.
public static async Task<DataServiceResponse> SaveChangesAsync(this DataServiceContext context, SaveChangesOptions options)
{
    var queryTask = Task.Factory.FromAsync<DataServiceResponse>(
        context.BeginSaveChanges(options, null, null),
        queryAsyncResult =>
        {
            var results = context.EndSaveChanges(queryAsyncResult);
            return results;
        });
    return await queryTask;
}
And I'm calling the update like so from a blank Windows Store XAML page:
public async Task UpdateWeekNo()
{
    var container = new ESFOdataService.Container(new Uri("http://localhost:56317/odata/"));
    var clients = (DataServiceQuery<ESFClient>)from p in container.ESFClients
                                               where p.UserID == new Guid("f04ad636-f896-4de4-816c-388106cd39ce")
                                               select p;
    var result = await clients.ExecuteAsync();
    var updatedClient = result.Single();
    if (updatedClient != null)
    {
        updatedClient.WeekNo = 19;
        container.UpdateObject(updatedClient);
        await container.SaveChangesAsync(SaveChangesOptions.PatchOnUpdate); // Use PATCH not MERGE.
    }
}
So, has anyone come across the same issue, or does anyone know how I can find out the actual error? One interesting point: if I debug the controller while running the Windows app, the Patch method does not get called.
OK, so I have finally solved this. Just a recap for those who run into the same issue: I have an OData WebApi controller and a Windows 8 Store application using the WCF Data Services client library, with the reference created from Add Service Reference. When trying to update (patch) a record, an exception was being thrown at EndSaveChanges. This is because, for some reason, POST tunneling is enabled by default on my context. Setting it to false allowed everything to work.
Context.UsePostTunneling = false;
Context.IgnoreResourceNotFoundException = true;
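Applied to the container from the UpdateWeekNo example above, that would look roughly like this (a sketch; the service URI and the SaveChangesAsync extension are the ones shown earlier in the question):
var container = new ESFOdataService.Container(new Uri("http://localhost:56317/odata/"));

// Send PATCH/MERGE verbs directly instead of tunneling them through POST,
// and don't throw when a lookup returns 404.
container.UsePostTunneling = false;
container.IgnoreResourceNotFoundException = true;

// ... query an entity, modify it, mark it with UpdateObject, then:
await container.SaveChangesAsync(SaveChangesOptions.PatchOnUpdate);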

Google+ insert moment using google-api-dotnet-client

I am trying to write an activity to Google+ using the dotnet client. The issue is that I can't seem to get the configuration of my client app right. According to the Google+ Sign-In configuration and this SO question, we need to add the requestvisibleactions parameter. I did that, but it did not work. I am using the scope https://www.googleapis.com/auth/plus.login and I even added the scope https://www.googleapis.com/auth/plus.moments.write, but the insert still did not work.
This is what my request url looks like:
https://accounts.google.com/ServiceLogin?service=lso&passive=1209600&continue=https://accounts.google.com/o/oauth2/auth?scope%3Dhttps://www.googleapis.com/auth/plus.login%2Bhttps://www.googleapis.com/auth/plus.moments.write%26response_type%3Dcode%26redirect_uri%3Dhttp://localhost/%26state%3D%26requestvisibleactions%3Dhttp://schemas.google.com/AddActivity%26client_id%3D000.apps.googleusercontent.com%26request_visible_actions%3Dhttp://schemas.google.com/AddActivity%26hl%3Den%26from_login%3D1%26as%3D-1fbe06f1c6120f4d&ltmpl=popup&shdf=Cm4LEhF0aGlyZFBhcnR5TG9nb1VybBoADAsSFXRoaXJkUGFydHlEaXNwbGF5TmFtZRoHQ2hpa3V0bwwLEgZkb21haW4aB0NoaWt1dG8MCxIVdGhpcmRQYXJ0eURpc3BsYXlUeXBlGgdERUZBVUxUDBIDbHNvIhTeWybcoJ9pXSeN2t-k8A4SUbfhsygBMhQivAmfNSs_LkjXXZ7bPxilXgjMsQ&scc=1
As you can see, there is a request_visible_actions parameter, and I even added one without underscores (requestvisibleactions) in case I got the parameter name wrong.
Let me say that my app is being authenticated successfully by the API. I can get the user's profile after being authenticated; it is the "insert moment" part where my app fails. My insert code:
var body = new Moment();
var target = new ItemScope();
target.Id = referenceId;
target.Image = image;
target.Type = "http://schemas.google.com/AddActivity";
target.Description = description;
target.Name = caption;
body.Target = target;
body.Type = "http://schemas.google.com/AddActivity";

var insert =
    new MomentsResource.InsertRequest(
        // this is a valid service instance as I am using this to query the user's profile
        _plusService,
        body,
        id,
        MomentsResource.Collection.Vault);

Moment result = null;
try
{
    result = insert.Fetch();
}
catch (ThreadAbortException)
{
    // User was not yet authenticated and is being forwarded to the authorization page.
    throw;
}
catch (Google.GoogleApiRequestException requestEx)
{
    // here I get a 401 Unauthorized error
}
catch (Exception ex)
{
}
For the OAuth flow, there are two issues with your request:
request_visible_actions is what is passed to the OAuth v2 server (don't pass requestvisibleactions)
plus.moments.write is a deprecated scope; you only need to pass in plus.login
Make sure your project references the latest version of the Google+ .NET client library from here:
https://developers.google.com/resources/api-libraries/download/stable/plus/v1/csharp
I have created a project on GitHub showing a full server-side flow here:
https://github.com/gguuss/gplus_csharp_ssflow
As Brettj said, you should be using the Google+ Sign-in Button as demonstrated in the latest Google+ samples from here:
https://github.com/googleplus/gplus-quickstart-csharp
First, ensure you are requesting all of the activity types you're writing. You will know this is working because the authorization dialog will show "Make your app activity available via Google, visible to you and: [...]" below the text that starts with "This app would like to". I know you checked this but I'm 90% sure this is why you are getting the 401 error code. The following markup shows how to render the Google+ Sign-In button requesting access to Add activities.
<div id="gConnect">
<button class="g-signin"
data-scope="https://www.googleapis.com/auth/plus.login"
data-requestvisibleactions="http://schemas.google.com/AddActivity"
data-clientId="YOUR_CLIENT_ID"
data-accesstype="offline"
data-callback="onSignInCallback"
data-theme="dark"
data-cookiepolicy="single_host_origin">
</button>
Assuming you have a PlusService object with the correct activity type set in data-requestvisibleactions, the following code, which you should be able to copy/paste to see it work, concisely demonstrates writing moments using the .NET client and has been tested to work:
Moment body = new Moment();
ItemScope target = new ItemScope();
target.Id = "replacewithuniqueforaddtarget";
target.Image = "http://www.google.com/s2/static/images/GoogleyEyes.png";
target.Type = "";
target.Description = "The description for the activity";
target.Name = "An example of add activity";
body.Target = target;
body.Type = "http://schemas.google.com/AddActivity";

MomentsResource.InsertRequest insert =
    new MomentsResource.InsertRequest(
        _plusService,
        body,
        "me",
        MomentsResource.Collection.Vault);
Moment wrote = insert.Fetch();
Note, I'm including Google.Apis.Plus.v1.Data for convenience.
Ah, it's that simple! Maybe not? I am answering my own question and will consequently accept it as the answer (after a few days, of course) so that others having the same issue may be guided. But I will definitely upvote Gus' answer, for it led me to the fix for my code.
So, according to @class's answer above and as explained on his blog, the key to successfully creating a moment is adding the request_visible_actions parameter. I did that, but my request still failed, and it is because I was missing an important thing: you need to add one more parameter, access_type, and it should be set to offline. The OAuth request, at a minimum, should look like: https://accounts.google.com/o/oauth2/auth?scope=https://www.googleapis.com/auth/plus.login&response_type=code&redirect_uri=http://localhost/&request_visible_actions=http://schemas.google.com/AddActivity&access_type=offline.
For the complete and correct client code you can get Gus' example here or download the entire dotnet client library, including the source and samples, and add what I added below. The most important thing to remember is to modify your AuthorizationServerDescription for the Google API. Here's my version of the authenticator:
public static OAuth2Authenticator<WebServerClient> CreateAuthenticator(
    string clientId, string clientSecret)
{
    if (string.IsNullOrWhiteSpace(clientId))
        throw new ArgumentException("clientId cannot be empty");
    if (string.IsNullOrWhiteSpace(clientSecret))
        throw new ArgumentException("clientSecret cannot be empty");

    var description = GoogleAuthenticationServer.Description;
    var uri = description.AuthorizationEndpoint.AbsoluteUri;

    // This is the one that has been documented on Gus' blog site
    // and over at Google's (https://developers.google.com/+/web/signin/).
    // This is not in the dotnetclient sample, by the way,
    // and you need to understand how OAuth and DNOA work.
    // I had this already, see my original post,
    // I thought it would make my day.
    if (uri.IndexOf("request_visible_actions") < 1)
    {
        var param = (uri.IndexOf('?') > 0) ? "&" : "?";
        description.AuthorizationEndpoint = new Uri(
            uri + param +
            "request_visible_actions=http://schemas.google.com/AddActivity");
    }

    // This is what I have been missing!
    // They forgot to tell us about this, or did I just miss it somewhere?
    uri = description.AuthorizationEndpoint.AbsoluteUri;
    if (uri.IndexOf("offline") < 1)
    {
        var param = (uri.IndexOf('?') > 0) ? "&" : "?";
        description.AuthorizationEndpoint =
            new Uri(uri + param + "access_type=offline");
    }

    // Register the authenticator.
    var provider = new WebServerClient(description)
    {
        ClientIdentifier = clientId,
        ClientSecret = clientSecret,
    };
    var authenticator =
        new OAuth2Authenticator<WebServerClient>(provider, GetAuthorization)
        { NoCaching = true };
    return authenticator;
}
Without the access_type=offline my code never worked and it will never work. Now I wonder why? It would be good to have some explanation.
