How to integrate Google Picker with new Google Identity Services JavaScript library - google-api

Because of the known issue described here (https://developers.google.com/identity/sign-in/web/troubleshooting), I want to update my application to use the new gsi sign-in, which uses fewer cookies than the previous versions and therefore might resolve the mentioned error...
My problem is that there's little to no documentation on how to integrate google picker with the new gsi.
I used to use gapi for some picker-related code, even just to load the library: gapi.load('picker', () => {}). The migration doc says to replace apis.google.com/js/api.js with the new gsi URL, and that a lot of other methods such as googleAuth.signIn or gapi.client.init will be deprecated by 2023. But then:
How do I load the picker without gapi available? Or does gapi still need to be imported, just without any sign-in related methods?
How do I pass the apiKey and scopes to be able to init the Google Picker?
For methods such as GoogleAuth.isSignedIn, the docs simply state "Remove. A user's current sign-in status on Google is unavailable. Users must be signed-in to Google for consent and sign-in moments." What does that even mean? I need to check whether the user is signed in so I don't show the popup again every time they want to upload a file from gPicker...
Before, we used to get an access_token in the callback of a reloadAuthResponse or a signIn; now how do we get the token?
Sorry for the question being confusing, I'm very confused with everything. Any input helps, thanks!

I came across https://developers.google.com/identity/oauth2/web/guides/use-token-model through: How to use scoped APIs with (GSI) Google Identity Services
I changed our code to load this script: https://accounts.google.com/gsi/client, and then modified our "authorize" function (see below) to use window.google.accounts.oauth2.initTokenClient instead of window.gapi.auth2.authorize to get the access_token.
Note that the callback has moved from the second argument of the window.gapi.auth2.authorize function to the callback property of the first argument of the window.google.accounts.oauth2.initTokenClient function.
After calling tokenClient.requestAccessToken() (see below), the callback passed to window.google.accounts.oauth2.initTokenClient is called with an object containing access_token.
const authorize = () =>
-  new Promise(res => window.gapi.auth2.authorize({
-    client_id: GOOGLE_CLIENT_ID,
-    scope: GOOGLE_DRIVE_SCOPE
-  }, res));
+  new Promise(res => {
+    const tokenClient = window.google.accounts.oauth2.initTokenClient({
+      client_id: GOOGLE_CLIENT_ID,
+      scope: GOOGLE_DRIVE_SCOPE,
+      callback: res,
+    });
+    tokenClient.requestAccessToken();
+  });
The way access_token is used was not changed:
new window.google.picker.PickerBuilder().setOAuthToken(access_token)
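For completeness, here is a minimal sketch of the whole flow with the new library, putting the pieces above together. It assumes GOOGLE_CLIENT_ID, GOOGLE_API_KEY and GOOGLE_DRIVE_SCOPE are defined, both the gsi/client and api.js scripts are loaded, and gapi.load('picker', ...) has already completed; the view and callback details are illustrative, not the only way to do it:
// get a token with the new token client, then hand it to the picker
const tokenClient = window.google.accounts.oauth2.initTokenClient({
  client_id: GOOGLE_CLIENT_ID,
  scope: GOOGLE_DRIVE_SCOPE,
  callback: (response) => {
    // response.access_token is what setOAuthToken expects
    const picker = new window.google.picker.PickerBuilder()
      .addView(window.google.picker.ViewId.DOCS)
      .setOAuthToken(response.access_token)
      .setDeveloperKey(GOOGLE_API_KEY) // the API key, same as before
      .setCallback((data) => console.log('picker result', data))
      .build();
    picker.setVisible(true);
  },
});
tokenClient.requestAccessToken();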

@piannone is correct; adding to their answer:
You'll still need to load 'client' code, as you're using authentication. That means you'll still include https://apis.google.com/js/api.js in your list of scripts. Only don't load 'auth2'. So, while you won't do:
gapi.load('auth2', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
you will need to:
gapi.load('client', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
(this is instead of directly loading https://accounts.google.com/gsi/client.js I guess.)
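For reference, the combination that ends up working looks roughly like this sketch; the handler names are placeholders, and the extra gsi script is only needed if you also use the token client from the answer above:
// scripts to include on the page:
//   https://apis.google.com/js/api.js          -> provides gapi ('client' + 'picker')
//   https://accounts.google.com/gsi/client     -> provides the new token client
function onAuthApiLoad() {
  // 'client' is loaded; initialize any API clients you need here
}
function onPickerApiLoad() {
  // 'picker' is loaded; window.google.picker.PickerBuilder is now available
}
gapi.load('client', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);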

Related

Why is NEAR requesting authorization when I call a change method on the contract?

I have a change method on my NEAR AssemblyScript Contract, which I'm calling from my front end (browser) code. The user is already signed in and can read the view methods on the contract. It's only after trying to call the change method that the user is prompted with the NEAR Wallet requesting authorization.
How can I call the change method without receiving the prompt to request permission again?
You can also see some examples here:
basic github.com/Learn-NEAR/starter--near-api-js
advanced github.com/Learn-NEAR/starter--near-api-js
There are basically 2 interfaces here:
// pass the contract name (as you did)
wallet.requestSignIn('dev-1634098284641-40067785396400');

// pass an object with contract and empty list of methods
wallet.requestSignIn({
  contractId: 'dev-1634098284641-40067785396400',
  methodNames: []
});

// add a list of methods to constrain what you can call
wallet.requestSignIn({
  contractId: 'dev-1634098284641-40067785396400',
  methodNames: ['write']
});
I'm answering my own question, because I spent some time trying to figure it out.
When I was calling the function requestSignIn() on the wallet, I also had to specify the contract name.
import {
  connect,
  WalletConnection
} from 'near-api-js';
...
const near = await connect(nearConfig);
const wallet = new WalletConnection(near);
// Need to pass the correct contract name to be able to call change methods
// on behalf of the user without requesting permission again
wallet.requestSignIn('dev-xxxx-yyyy');
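Once the user has signed in with the contract name, a change method can be called without a second wallet prompt. Here is a minimal sketch; the Contract wrapper comes from near-api-js, but the method names ('read', 'write') and arguments are assumptions for illustration:
import { Contract } from 'near-api-js';
const contract = new Contract(wallet.account(), 'dev-1634098284641-40067785396400', {
  viewMethods: ['read'],     // assumed view method name
  changeMethods: ['write'],  // must match what you allowed in requestSignIn
});
// the access key created by requestSignIn is scoped to this contract,
// so this call goes through without bouncing the user to the NEAR Wallet
await contract.write({ text: 'hello' });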

credentials for google knowledge graph

I am trying to use the Google Knowledge Graph API. I already have the API key, and I'm using the client library instead of the RESTful API.
kgSearch = Kgsearch::KgsearchService.new
response = kgSearch.search_entities(query: query)
I have tried to instantiate the service as below
kgSearch = Kgsearch::KgsearchService.new(api: 'klfkdlfkdlm')
It's rejected because the initializer expects no arguments.
Any idea how to add the api_key?
I also tried:
response = kgSearch.search_entities(query: query, api: 'fjfkjfl')
Same thing.
Any ideas?
According to the Ruby Docs for the Google Api Client, key is an instance method where you can assign your api key (http://www.rubydoc.info/github/google/google-api-ruby-client/Google/Apis/KgsearchV1/KgsearchService#key-instance_method).
So I believe you'd do something like the following:
kgSearch = Kgsearch::KgsearchService.new
kgSearch.key = 'your_key_here'
response = kgSearch.search_entities(query: query) # and any other options that are necessary

Direct (and simple!) AJAX upload to AWS S3 from (AngularJS) Single Page App

I know there's been a lot of coverage on upload to AWS S3. However, I've been struggling with this for about 24 hours now and I have not found any answer that fits my situation.
What I'm trying to do
Upload a file to AWS S3 directly from my client to my S3 bucket. The situation is:
It's a Single Page App, so upload request must be in AJAX
My server and my client are not on the same domain
The S3 bucket is of the newest sort (Frankfurt), for which some signature-generating libraries don't work (see below)
Client is in AngularJS
Server is in ExpressJS
What I've tried
Heroku's article on direct upload to S3. Doesn't fit my client/server configuration (plus it really does not fit harmoniously with Angular)
Ready-made directives like ng-s3upload. They don't work because their signature-generating algorithm is not accepted by recent S3 buckets.
Manually creating a file upload directive and logic on the client like in this article (using FormData and Angular's $http). It consisted of getting a signed URL from AWS on the server (and that part worked), then AJAX-uploading to that URL. It failed with some mysterious CORS-related message (although I did set a CORS config on Heroku)
It seems I'm facing 2 difficulties: having a file input that works in my Single Page App, and getting AWS's workflow right.
The kind of solution I'm looking for
If possible, I'd like to avoid 'all included' solutions that manage the whole process while hiding all of the complexity, making it hard to adapt to special cases. I'd much rather have a simple explanation breaking down the flow of data between the various components involved, even if it requires some more plumbing from me.
I finally managed. The key points were:
Let go of Angular's $http, and use native XMLHttpRequest instead.
Use the getSignedUrl feature of AWS's SDK, instead of implementing my own signature-generating workflow like many libraries do.
Set the AWS configuration to use the proper signature version (v4 at the time of writing) and region ('eu-central-1' in the case of Frankfurt).
Below is a step-by-step guide of what I did; it uses AngularJS on the client and NodeJS on the server, but it should be rather easy to adapt to other stacks, especially because it deals with the most pathological cases (an SPA on a different domain than the server, with a bucket in a recent, at the time of writing, region).
Workflow summary
The user selects a file in the browser; your JavaScript keeps a reference to it.
The client sends a request to your server to obtain a signed upload URL.
Your server chooses a name for the object to put in the bucket (make sure to avoid name collisions!).
The server obtains a signed URL for your object using the AWS SDK, and sends it back to the client. This involves the object's name and the AWS credentials.
Given the file and the signed URL, the client sends a PUT request directly to your S3 Bucket.
Before you start
Make sure that:
Your server has the AWS SDK
Your server has AWS credentials with proper access rights to your bucket
Your S3 bucket has a proper CORS configuration for your client.
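For the last point, here is a minimal sketch of the CORS configuration I mean (S3 used this XML format at the time of writing; the origin value is an example, use your client's domain):
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://your-client-app.example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>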
Step 1: setup a SPA-friendly file upload form / widget.
All that matters is to have a workflow that eventually gives you programmatic access to a File object - without uploading it.
In my case, I used the ng-file-select and ng-file-drop directives of the excellent angular-file-upload library. But there are other ways of doing it (see this post for example.).
Note that you can access useful information in your file object such as file.name, file.type etc.
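If you'd rather not pull in a library, a bare-bones sketch of getting a File object from a plain input element looks like this (the element id is an assumption):
// assumes <input type="file" id="file-input"> somewhere in your markup
var fileInput = document.getElementById('file-input');
fileInput.addEventListener('change', function () {
  var file = fileInput.files[0]; // the File object you'll PUT to S3 later
  console.log(file.name, file.type, file.size);
});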
Step 2: Get a signed URL for the file on your server
On your server, you can use the AWS SDK to obtain a secure, temporary URL to PUT your file from someplace else (like your frontend).
In NodeJS, I did it this way:
// ---------------------------------
// some initial configuration
var aws = require('aws-sdk');
aws.config.update({
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  signatureVersion: 'v4',
  region: 'eu-central-1'
});

// ---------------------------------
// now say you want to fetch a URL for an object named `objectName`
var s3 = new aws.S3();
var s3_params = {
  Bucket: MY_BUCKET_NAME,
  Key: objectName,
  Expires: 60,
  ACL: 'public-read'
};
s3.getSignedUrl('putObject', s3_params, function (err, signedUrl) {
  // send signedUrl back to client
  // [...]
});
You'll probably also want to know the URL from which to GET your object later (typically if it's an image). To do this, I simply removed the query string from the URL:
var url = require('url');
// ...
var parsedUrl = url.parse(signedUrl);
parsedUrl.search = null;
var objectUrl = url.format(parsedUrl);
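Wired into Express, the server side can look roughly like the sketch below; the route path, query parameter and key-naming scheme are my own assumptions, not something the AWS SDK prescribes:
// minimal Express endpoint the client calls to obtain a signed upload URL
// (assumes `aws` is configured as above)
var express = require('express');
var app = express();
var s3 = new aws.S3();
app.get('/s3-signed-url', function (req, res) {
  var s3_params = {
    Bucket: MY_BUCKET_NAME,
    Key: 'uploads/' + Date.now() + '-' + req.query.name, // crude way to avoid name collisions
    Expires: 60,
    ACL: 'public-read'
  };
  s3.getSignedUrl('putObject', s3_params, function (err, signedUrl) {
    if (err) return res.status(500).send(err.message);
    res.json({ signedUrl: signedUrl });
  });
});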
Step 3: send the PUT request from the client
Now that your client has your File object and the signed URL, it can send the PUT request to S3. My advice in Angular's case is to just use XMLHttpRequest instead of the $http service:
var signedUrl, file;
// ...
var d_completed = $q.defer(); // since I'm working with Angular, I use $q for asynchronous control flow, but it's not mandatory
var xhr = new XMLHttpRequest();
xhr.file = file; // not necessary if you create scopes like this
xhr.onreadystatechange = function (e) {
  if (this.readyState === 4) {
    // done uploading! HURRAY!
    d_completed.resolve(true);
  }
};
xhr.open('PUT', signedUrl, true);
xhr.setRequestHeader("Content-Type","application/octet-stream");
xhr.send(file);
Acknowledgements
I would like to thank emil10001 and Will Webberley, whose publications were very valuable to me for this issue.
You can use the ng-file-upload $upload.http method in conjunction with the aws-sdk getSignedUrl to accomplish this. After you get the signedUrl back from your server, this is the client code:
var fileReader = new FileReader();
fileReader.readAsArrayBuffer(file);
fileReader.onload = function (e) {
  $upload.http({
    method: 'PUT',
    headers: { 'Content-Type': file.type !== '' ? file.type : 'application/octet-stream' },
    url: signedUrl,
    data: e.target.result
  }).progress(function (evt) {
    var progressPercentage = parseInt(100.0 * evt.loaded / evt.total);
    console.log('progress: ' + progressPercentage + '% ' + file.name);
  }).success(function (data, status, headers, config) {
    console.log('file ' + file.name + ' uploaded. Response: ' + data);
  });
};
To do multipart uploads, or uploads larger than 5 GB, this process gets a bit more complicated, as each part needs its own signature. Conveniently, there is a JS library for that:
https://github.com/TTLabs/EvaporateJS
via https://github.com/aws/aws-sdk-js/issues/468
You can also use the open-source s3FileUpload directive, which has dynamic data-binding and auto-callback functions: https://github.com/vinayvnvv/s3FileUpload

Google Spreadsheet API - returns remote 500 error

Has anyone battled 500 errors with the Google spreadsheet API for google domains?
I have copied the code in this post (2-legged OAuth): http://code.google.com/p/google-gdata/source/browse/trunk/clients/cs/samples/OAuth/Program.cs, substituted in my domain's API id and secret and my own credentials, and it works.
So it appears my domain setup is fine (at least for the contacts/calendar apis).
However, swapping the code out for a new Spreadsheet service/query instead, it reverts to type: the remote server returned an internal server error (500).
var ssq = new SpreadsheetQuery();
ssq.Uri = new OAuthUri("https://spreadsheets.google.com/feeds/spreadsheets/private/full", "me", "mydomain.com");
ssq.OAuthRequestorId = "me@mydomain.com"; // can do this instead of using OAuthUri for queries
var feed = ssservice.Query(ssq); //boom 500
Console.WriteLine("ss:" + feed.Entries.Count);
I am befuddled.
I had to make sure to use the "correct" class:
not
//using SpreadsheetQuery = Google.GData.Spreadsheets.SpreadsheetQuery;
but
using SpreadsheetQuery = Google.GData.Documents.SpreadsheetQuery;
Seems you need the GDocs API to query for spreadsheets, but the Spreadsheets API to query inside a spreadsheet; nowhere on the internet until now will you find this undeniably important titbit. Google sucks hard on that one.

Simple Claims Transformation for an RP-STS in Geneva Framework

After reading the MSDN article (http://msdn.microsoft.com/en-us/magazine/2009.01.genevests.aspx) on implementing a Custom STS using the Microsoft Geneva Framework I am a bit puzzled about one of the scenarios covered there. This scenario is shown in figure 13 of the above referenced article.
My questions are around how the RP initiates the call to the RP-STS in order to pass on the already obtained claims from the IP-STS. How does the desired method DeleteOrder() get turned into a claims request for the Action claim from the RP-STS, which responds with the Action claim with a value of Delete that authorizes the call? I also think the figure is slightly incorrect in that the interaction between the RP-STS and the Policy Engine should have the claims and arrows the other way around.
I can see the structure, but it's not clear what is provided by Geneva/WCF and what has to be done in code inside the RP. That would seem a bit odd, since we could not protect the DeleteOrder method with a PrincipalPermission demand for the Delete "permission" but would have to demand a Role first and then obtain the fine-grained claim of the Delete Action after that point.
If I have missed the point (since I cannot find this case covered easily on the Web), then apologies!
Thanks in advance.
I asked the same question on the Geneva Forum at
http://social.msdn.microsoft.com/Forums/en/Geneva/thread/d10c556c-1ec5-409d-8c25-bee2933e85ea?prof=required
and got this reply:
Hi Dokie,
I wondered this same thing when I read that article. As I've pondered how such a scenario would be implemented, I've come up w/ two ideas:
The RP is actually configured to require claims from the RP-STS; the RP-STS requires a security token from the IP-STS. As a result, when the subject requests a resource of the RP, it bounces him to the RP-STS who bounces him to the IP-STS. After authenticating there, he is bounced back to the RP-STS, the identity-centric claims are transformed into those necessary to make an authorization decision and returned to the RP.
The RP is configured to have an interceptor (e.g., an AuthorizationPolicy if it's a WCF service) that grabs the call, sees the identity-centric claims, creates an RST (using the WSTrustClient), passes it to the RP-STS, that service expands the claims into new one that are returned to the RP, and the RP makes an authorization decision.
I've never implemented this, but, if I were going to, I would explore those two ideas further.
HTH!
Regards,
Travis Spencer
So I will try option 2 first and see if that works out then formulate an answer here.
I've got situation one working fine.
In my case AD FS is the Identity Service and a custom STS is the Resource STS.
All web apps use the same Resource STS, but after a user visits another application the identity-related claims are not added again by the AD FS, since the user is already authenticated. How can I force or request the basic claims from the AD FS again?
I've created a call to the AD FS with ActAs; it now returns my identity claims.
Remember to enable a Delegation allowed rule for the credentials used to call the AD FS.
string stsEndpoint = "https://<ADFS>/adfs/services/trust/2005/usernamemixed";

var trustChannelFactory = new WSTrustChannelFactory(
    new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential), stsEndpoint);
trustChannelFactory.Credentials.UserName.UserName = @"DELEGATE";
trustChannelFactory.Credentials.UserName.Password = @"PASSWORD";
trustChannelFactory.TrustVersion = TrustVersion.WSTrustFeb2005;

//// Prepare the RST.
//var trustChannelFactory = new WSTrustChannelFactory(tokenParameters.IssuerBinding, tokenParameters.IssuerAddress);
var trustChannel = (WSTrustChannel)trustChannelFactory.CreateChannel();
var rst = new RequestSecurityToken(RequestTypes.Issue);
rst.AppliesTo = new EndpointAddress(@"https://<RPADDRESS>");

// If you're doing delegation, set the ActAs value.
var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
var bootstrapToken = principal.Identities[0].BootstrapToken;

// The bootstrap token is the token received from the AD FS after successful authentication;
// it can be reused to call the AD FS with the user's credentials.
if (bootstrapToken == null)
{
    throw new Exception("Bootstraptoken is empty, make sure SaveBootstrapTokens = true at the RP");
}

rst.ActAs = new SecurityTokenElement(bootstrapToken);

// Beware: this mode makes sure that no certificate is needed for the RP -> AD FS communication.
rst.KeyType = KeyTypes.Bearer;

// Disable the need for AD FS to encrypt the data sent to the R-STS.
Scope.SymmetricKeyEncryptionRequired = false;

// Here's where you can look up claims requirements dynamically.
rst.Claims.Add(new RequestClaim(ClaimTypes.Name));
rst.Claims.Add(new RequestClaim(ClaimTypes.PrimarySid));

// Get the token and attach it to the channel before making a request.
RequestSecurityTokenResponse rstr = null;
var issuedToken = trustChannel.Issue(rst, out rstr);
var claims = GetClaimsFromToken((GenericXmlSecurityToken)issuedToken);

private static ClaimCollection GetClaimsFromToken(GenericXmlSecurityToken genericToken)
{
    var handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
    var token = handlers.ReadToken(new XmlTextReader(new StringReader(genericToken.TokenXml.OuterXml)));
    return handlers.ValidateToken(token).First().Claims;
}
