Epic FHIR Integrations: Moving from Sandbox to Prod - hl7-fhir

I've used SMART on FHIR to successfully pull test patient data from Epic's sandbox for a patient-facing app (it's a standalone launch). Now I'm trying to pull real patient data from a health system, but I keep getting this error when trying to authorize my app: "OAuth2 Error. Something went wrong trying to authorize the client. Please try logging in again."
When I was testing with sandbox data, I used this code as a reference and then modified it to work with React. This is the code I used to authorize my app:
function pullEpicData() {
  FHIR.oauth2.authorize({
    'client_id': {Non-Prod Client ID given by Epic},
    'scope': 'PATIENT.READ, PATIENT.SEARCH',
    'redirect_uri': {my website},
    'iss': 'https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR/R4/'
  });
}
This worked fine.
When I switched to prod mode, I used the following code to try to authorize my app:
function pullEpicData() {
  FHIR.oauth2.authorize({
    'client_id': {Prod Client ID given by Epic},
    'scope': 'PATIENT.READ, PATIENT.SEARCH',
    'redirect_uri': {my website},
    'iss': 'https://sfd.stanfordmed.org/FHIR/api/FHIR/R4/'
  });
}
However, this authorization keeps failing.
I didn't make any other changes to my code. Is there anything else I should be doing when switching from sandbox to prod to make the authorization work properly? I'm not using refresh tokens at the moment. Thanks!

There are two very common causes of this issue:
1. Your client ID does not qualify for auto-sync.
2. You didn't wait the ~12 hours for your client ID to sync.
For auto-sync: when you register a client ID, the APIs you select may disqualify you from auto-sync. If you don't qualify for auto-sync, the healthcare organization you want to connect to must explicitly approve your app before it can be used to connect to their endpoints. An indicator near the bottom of the client registration form shows whether you qualify for auto-sync.
Regardless of whether your app qualifies for auto-sync or was explicitly approved by a health system, any changes to a client can take up to ~12 hours to sync (a job that downloads updates runs roughly every 12 hours).
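Separately, while you wait for a sync, you can rule out a bad iss by checking that the production server actually advertises its OAuth endpoints. A minimal sketch, assuming the Stanford URL from the question (the /.well-known/smart-configuration discovery document is defined by the SMART App Launch spec):
// A 404 or an HTML response here suggests the iss URL itself is wrong.
const iss = 'https://sfd.stanfordmed.org/FHIR/api/FHIR/R4';

fetch(iss + '/.well-known/smart-configuration', { headers: { Accept: 'application/json' } })
  .then(function (res) { return res.json(); })
  .then(function (cfg) {
    console.log('authorize endpoint:', cfg.authorization_endpoint);
    console.log('token endpoint:', cfg.token_endpoint);
  })
  .catch(function (err) { console.error('Discovery failed:', err); });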
Other common OAuth2 connection issues are documented in our Troubleshooting Guide (requires login, but you can sign up for an account for free).

Related

Laravel authentication between different back-end projects

I have two or more back-end API (Laravel) projects and a single front-end React JS project. From the front-end app, I will call all of the back-end API projects.
When the user logs in, authentication is checked in App 1 (with Laravel Passport), which returns an access_token.
I want to use this access_token when calling the APIs of both App 1 and App 2. The main problem is how App 2 can validate an access_token issued by App 1.
One way to solve this, though I'm not sure it's the correct approach, is to create middleware on the App 2 server that takes every incoming access_token and sends it to App 1 for validation. If App 1 returns true, the user can access; otherwise they can't.
But I think this is inappropriate, because validating every incoming request from App 2 against App 1 would slow down the server and create a bottleneck.
I have already searched a lot of posts on Google but haven't yet found the best way for me. I found one OAuth server implementation (https://www.youtube.com/watch?v=K7RfBgoeg48), but I don't think that approach works well with my project structure because I have a lot of customization.
I also read the discussion on reddit (https://www.reddit.com/r/laravel/comments/dqve4z/same_login_across_multiple_laravel_instances/), but I still don't understand it very well.
You have several options here:
1. I expect you have a database containing all your users' access and refresh tokens, so create a database connection from the App 2 backend to that database and check tokens directly in App 2.
2. Create the middleware that checks user authentication from App 2 against App 1; but, as you correctly pointed out, that adds extra loading time to every request.
3. Depending on whether the end user needs to know they're connecting to "another server" (meaning App 2), you can use OAuth2 authorization: https://www.youtube.com/watch?v=zUG6BHgJR9w
Option 1 seems like the best solution to me. Whichever backend option you pick, the front-end side looks the same; see the sketch below.
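For illustration, here's a minimal front-end sketch (base URLs and token storage are placeholders, not from the question) of the same token being sent to both backends:
// The access_token issued by App 1's login is reused for every backend;
// each backend validates it using one of the options above.
const token = localStorage.getItem('access_token');

function callApi(baseUrl, path) {
  return fetch(baseUrl + path, {
    headers: { 'Authorization': 'Bearer ' + token }
  }).then(function (res) { return res.json(); });
}

callApi('https://app1.example.com/api', '/user');   // checked by App 1 (Passport)
callApi('https://app2.example.com/api', '/orders'); // checked by App 2's token lookup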

The requested application with ID xxxxxx was not found

I'm trying to post/get scores from Google Play leaderboards.
I added all the meta tags from the documentation, including my client ID:
<meta name="google-signin-client_id" content="XXXXXX-YYYYYYYYYYY.apps.googleusercontent.com" />
I have also set up the Google Sign-In system and all is fine; however, when I try to call the leaderboards API I get the error message: The requested application with ID xxxxxx was not found
I am calling the API as mentioned in the docs:
gapi.client.request({
  path: '/games/v1/leaderboards/LEADERBOARD-ID',
  params: { maxResults: 3 },
  callback: function(response) {
    console.log(response);
  }
});
I am not sure if the problem is a missing argument needed to execute the request.
Try treating the request as a promise and using the then method, as described in the gapi.client.Request reference:
gapi.client.Request
An object encapsulating an HTTP request. This object is not instantiated directly; rather, it is returned by gapi.client.request. There are two ways to execute a request. We recommend that you treat the object as a promise and use the then method, but you can also use the execute method and pass in a callback.
You can refer to this GitHub post for additional reference.
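For example, treating the request object from the question as a promise (LEADERBOARD-ID remains a placeholder):
// Same request as above, but consumed via then() instead of the callback property.
gapi.client.request({
  path: '/games/v1/leaderboards/LEADERBOARD-ID',
  params: { maxResults: 3 }
}).then(function (response) {
  console.log(response.result);
}, function (reason) {
  console.error('Leaderboards call failed:', reason.result.error.message);
});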
This message:
W/AchievementAgent( 3558): {"code":404,"errors":[{"message":"The requested application with ID 571707973781 was not found.","domain":"global","reason":"notFound"}]}
is a little cryptic but points to a mismatch with the auth configuration on the console and the application.
You'll want to double-check the SHA1 fingerprint of the keystore you signed the app with against the one configured in the dev console.
It could also be the bundle ID, but that is hard to mess up since it is part of the resource data used when running Setup for the plugin.
Also, it could be that the player is not a tester for this game.
For anyone having the same issue: you need to publish the beta version of the game to be able to interact with the leaderboard.
Note: in the beta version, only tester accounts added to the game can access the leaderboard.

sails.js session variables getting lost when API is accessed from android

I have created a REST API for an android application. There are certain req.session variables that I set at certain points and use them in the policies for further steps. Everything works fine when I access the API from a REST client like POSTMAN.
However, when it is accessed from a native android app, the req.session values that I set in one step are lost in the next step.
Any idea why this might be happening and what the workaround might be?
Sessions don't work reliably with requests sent from a non-browser client (in this case the Android device): unless the client stores and re-sends the session cookie the way a browser (or Postman) does, each request starts a fresh session.
You should consider using an OAuth strategy to accomplish this; it's a bit complicated, though.
Or just generate an accessToken on each successful login and return it to the client. For all further requests, the client must attach this accessToken (usually in a header); a minimal sketch follows.
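One way to do this (assuming the jsonwebtoken npm package; the policy name, secret and payload fields are placeholders, not from the question):
// api/policies/tokenAuth.js - verify the token the Android client attaches
// in the Authorization header, instead of relying on session cookies.
var jwt = require('jsonwebtoken');
var SECRET = 'replace-with-a-real-secret';

module.exports = function (req, res, next) {
  var header = req.headers.authorization || '';
  var token = header.replace(/^Bearer\s+/i, '');
  jwt.verify(token, SECRET, function (err, decoded) {
    if (err) { return res.forbidden('Invalid or missing access token'); }
    req.userId = decoded.userId; // carries what the session variable used to hold
    return next();
  });
};
// In the login action, issue the token on successful authentication, e.g.:
//   var accessToken = jwt.sign({ userId: user.id }, SECRET, { expiresIn: '7d' });
//   return res.json({ accessToken: accessToken });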
This is a good SO question for the same issue: How to implement a secure REST API with node.js

Correct way to use a Google Apps Marketplace service account to connect to Gmail IMAP and other services

One of the features of our Marketplace app accesses the user's Gmail account via IMAP. We are using the google-api-java-client and google-oauth-java-client libraries, with code similar to this example from the java-gmail-imap project:
GoogleCredential credential = new GoogleCredential.Builder()
    .setTransport(HTTP_TRANSPORT)
    .setJsonFactory(JSON_FACTORY)
    .setServiceAccountId(SERVICE_ACCOUNT_ID)
    .setServiceAccountScopes(Arrays.asList(GMAIL_SCOPE))
    .setServiceAccountPrivateKey(PRIVATE_KEY)
    .setServiceAccountUser(emailAddress)
    .build();
credential.refreshToken();
We are then using code based on the examples at https://code.google.com/p/google-mail-oauth2-tools to make the IMAP connection, e.g.:
IMAPStore imapStore = OAuth2Authenticator.connectToImap("imap.googlemail.com",
    993, emailAddress, credential.getAccessToken(), false);
The majority of the time this appears to work correctly; however, we are seeing that for a small but significant number of requests, the call to Google made by refreshToken() fails with an HTTP 500 error and an HTML response where the JSON would normally be returned, e.g.:
<p class="large"><b>500.</b> <ins>That's an error.</ins></p>
<p class="large">The server could not process your request.
<ins>That's all we know.</ins></p>
We were advised by a developer advocate at Google that refresh tokens are not supported for service accounts and that we should be using an approach like the one in this example.
However, it seems that without the call to refreshToken(), accessToken is not populated on the credential object, and this results in a NullPointerException when we call OAuth2Authenticator.connectToImap.
From the source for GoogleCredential, it did seem like executeRefreshToken() is overridden to handle service accounts, i.e. instead of performing a refresh it simply requests a new token; this bit of code in Credential then handles populating the access token:
TokenResponse tokenResponse = executeRefreshToken();
if (tokenResponse != null) {
    setFromTokenResponse(tokenResponse); ....
We were unsure whether we need to enclose our call to refreshToken() in a retry loop to work around the intermittent 500 errors, or whether we need to make other changes to our code to follow the recommended approach for this scenario.
Can anyone advise?
I use the java-gmail-imap example code in production (but it is only used to display an inbox in our University portal; there isn't much interaction that would require me to reuse the same refresh token, for instance).
Depending on your usage, I wonder if in your case some kind of throttling is coming into play (I've read in places that Gmail can occasionally throttle access).
Elsewhere, I've seen Google API documentation talk about making retries using an exponential backoff algorithm; a generic sketch of that pattern follows.
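A minimal sketch of that pattern (in JavaScript for brevity, since this is generic retry logic; in your case the retried call would be credential.refreshToken() on the Java side, and the attempt counts and delays are arbitrary illustrative values):
// Retry a flaky async call with doubling delays plus a little jitter.
async function withBackoff(fn, maxAttempts, baseDelayMs) {
  maxAttempts = maxAttempts || 5;
  baseDelayMs = baseDelayMs || 500;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up after the last attempt
      const delay = baseDelayMs * Math.pow(2, attempt) + Math.random() * 100;
      await new Promise(function (resolve) { setTimeout(resolve, delay); });
    }
  }
}
// Usage sketch (refreshTokenSomehow is a hypothetical stand-in for the real call):
//   withBackoff(function () { return refreshTokenSomehow(); });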
You have to be a little careful when comparing OAuth 2.0 usage across the other Google service APIs and Gmail: Gmail is special in that it uses XOAUTH2. That said, I've seen other Google APIs that appear to need the refreshToken call. The documentation is a bit unclear and says things like "Refresh the access token, if necessary" (as you say, it doesn't seem to work without this step, but I haven't done any experimentation with re-using refresh tokens via credential.setRefreshToken(String refreshToken)).
I'd be interested to hear how you get on.

Creating a local Token cache using the Geneva Framework

Haven't seen many Geneva-related questions yet; I have posted this question in the Geneva Forum as well...
I'm working on a scenario where we have a WinForms app with a wide install base, which will issue frequent calls to various services hosted centrally by us throughout its operation.
The services all use the Geneva Framework, and all clients are expected to call our STS first to be issued a token that allows access to the services.
Out of the box, using ws2007FederationHttpBinding, the app can be configured to retrieve a token from the STS before each service call, but obviously this is not the most efficient approach, as we're almost duplicating the effort of calling the services.
Alternatively, I have implemented the code required to retrieve the token "manually" from the app and then pass the same pre-retrieved token when calling operations on the services (based on the WSTrustClient sample and help on the forum). That works well, so we do have a solution, but I believe it's not very elegant, as it requires building the WCF channel in code, moving away from the wonderful WCF configuration.
I much prefer the ws2007FederationHttpBinding approach, whereby the client simply calls the service like any other WCF service, without knowing anything about Geneva, and the binding takes care of the token exchange.
Then someone (Jon Simpson) gave me [what I think is] a great idea - add a service, hosted in the app itself, to cache locally retrieved tokens.
The local cache service would implement the same contract as the STS; when receiving a request, it would check whether a cached token exists and, if so, return it; otherwise it would call the 'real' STS, retrieve a new token, cache it and return it.
The client app could then still use ws2007FederationHttpBinding, but instead of having the STS as the issuer it would have the local cache.
This way I think we can achieve the best of both worlds - caching of tokens without service-specific custom code; our cache should be able to handle tokens for all RPs.
I have created a very simple prototype to see if it works, and - somewhat unsurprisingly, unfortunately - I am slightly stuck.
My local service (currently a console app) gets the request and - the first time around - calls the STS to retrieve the token, caches it and successfully returns it to the client, which subsequently uses it to call the RP. All works well.
The second time around, however, my local cache service tries to use the same token again, but the client side fails with a MessageSecurityException -
"Security processor was unable to find a security header in the message. This might be because the message is an unsecured fault or because there is a binding mismatch between the communicating parties. This can occur if the service is configured for security and the client is not using security."
Is there something preventing the same token from being used more than once? I doubt it, because when I reused the token as per the WSTrustClient sample it worked well. What am I missing? Is my idea possible? A good one?
Here are the (very basic, at this stage) main bits of the local cache code -
static LocalTokenCache.STS.Trust13IssueResponse cachedResponse = null;

public LocalTokenCache.STS.Trust13IssueResponse Trust13Issue(LocalTokenCache.STS.Trust13IssueRequest request)
{
    if (TokenCache.cachedResponse == null)
    {
        Console.WriteLine("cached token not found, calling STS");
        // create a proxy for the real STS
        STS.WSTrust13SyncClient sts = new LocalTokenCache.STS.WSTrust13SyncClient();
        // set credentials for the STS
        sts.ClientCredentials.UserName.UserName = "Yossi";
        sts.ClientCredentials.UserName.Password = "p#ssw0rd";
        // call Issue on the real STS
        STS.RequestSecurityTokenResponseCollectionType stsResponse = sts.Trust13Issue(request.RequestSecurityToken);
        // create the result object - a container type for the response, and what we need to return
        TokenCache.cachedResponse = new LocalTokenCache.STS.Trust13IssueResponse();
        // assign the STS response to the return value...
        TokenCache.cachedResponse.RequestSecurityTokenResponseCollection = stsResponse;
    }
    // ...and return (the cached response on every call after the first)
    return TokenCache.cachedResponse;
}
This is almost embarrassing, but thanks to Dominick Baier on the forum I now realise I've missed a huge point (I knew it didn't make sense! Honestly! :-)) -
A token gets retrieved once per service proxy, assuming it hasn't expired, and so all I needed to do was reuse the same proxy, which I planned to do anyway but, rather stupidly, didn't in my prototype.
In addition, I found a very interesting sample among the MSDN WCF samples - Durable Issued Token Provider - which, if I understand it correctly, uses a custom endpoint behaviour on the client side to implement token caching, which is very elegant.
I will still look at this approach, as we have several services and so could achieve even more efficiency by re-using the same token between their proxies.
So - two solutions, pretty much in front of my eyes; hope my stupidity helps someone at some point!
I've provided a complete sample for caching the token here: http://blogs.technet.com/b/meamcs/archive/2011/11/20/caching-sts-security-token-with-an-active-web-client.aspx
