I am looking to develop a FHIR-based/derived API to support cross-system/organisation functions. Primarily for the purposes of audit, and to a lesser degree access control, I would like the initiating system to include** information that describes the origin of the call. This includes:
Source user
Source organisation
Source application (i.e. application name and version)
The first two items can easily be represented with Practitioner; however, I can't find any resource that could be used to carry the source application.
Does anybody know of any resources that could be used for this purpose? Where possible I want to avoid using extensions.
** This is actually being included in an OAuth Client Credentials flow, where the calling system obtains a token by providing a system-level shared secret and origin. This token is then supplied to the FHIR API method.
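For concreteness, a rough sketch of that token exchange (not the actual implementation; the token endpoint, field names and the custom origin parameter are all illustrative):

    # Hypothetical client-credentials exchange: the calling system trades
    # its system-level shared secret (plus origin metadata) for a token,
    # then presents that token to the FHIR API.
    import requests

    token_resp = requests.post(
        "https://auth.example.org/token",  # placeholder token endpoint
        data={
            "grant_type": "client_credentials",
            "client_id": "order-entry",             # the calling system
            "client_secret": "s3cret",              # system-level shared secret
            "origin": "org-123/order-entry/2.3.1",  # assumed custom origin claim
        })
    access_token = token_resp.json()["access_token"]

    # The token is then supplied on every FHIR call.
    resp = requests.get("https://fhir.example.org/Patient/42",
                        headers={"Authorization": f"Bearer {access_token}"})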
Answering my own question
The Device resource covers what I need:
http://hl7.org/fhir/device.html
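For illustration, a minimal sketch of a Device resource carrying the application name and version (field names follow the STU3 Device definition; check your FHIR version, as R4 restructured them; the identifier system and server URL are made up):

    # A Device resource describing the source application, sent alongside
    # the Practitioner/Organization that identify the user and organisation.
    import requests

    device = {
        "resourceType": "Device",
        "identifier": [{"system": "urn:example:apps",   # placeholder system
                        "value": "order-entry"}],
        "manufacturer": "Example Corp",
        "model": "Order Entry Client",
        "version": "2.3.1",
    }

    resp = requests.post("https://fhir.example.org/Device",  # placeholder endpoint
                         json=device,
                         headers={"Content-Type": "application/fhir+json"})
    resp.raise_for_status()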
I am writing a server-side Python script with PyDrive which needs to store a file in a specific gdrive. PyDrive and this post suggest using a service account.
However, this would mean that all gdrives would be accessible with this service account's credentials, which I would rather avoid.
Ideally, only one specific gdrive, or only the gdrives a specific user has access to, should be accessible.
Is it possible to programmatically grant access to only one specific gdrive?
[Edit]
As mentioned in the comments, I am apparently not looking for an OAuth flow.
I am looking for server-to-server communication for accessing one specific Google Drive, following the principle of least-privilege access. Doing this with a service account + domain-wide delegation and the Google Drive r/w scope would mean that all Google Drives can be accessed with this service account, which is not what I want.
Unfortunately, there is a domain-wide policy in place which forbids sharing Google Drives with "other" domains. This means I cannot use a service account without domain-wide delegation and just share the drive with it.
I don't understand what you mean by "programmatically" when you tag the question as OAuth, asking for an OAuth2 flow, which is interactive. When there is nobody to press the buttons, that probably isn't the authentication flow you're looking for. Just share a directory with a service account; no domain-wide delegation is required (with that enabled, there would be no need to share it).
One could even abstract the whole Drive API access credentials away by using a simple Cloud Function whose task is to update one file, triggered through HTTP and utilizing the Drive API.
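A sketch of the share-with-a-service-account suggestion (assuming your domain policy allows the share; the folder ID and key file are placeholders):

    # The service account sees only what has been shared with it, so
    # sharing a single folder scopes its access to that folder.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/drive"]
    creds = service_account.Credentials.from_service_account_file(
        "service-account-key.json", scopes=SCOPES)
    drive = build("drive", "v3", credentials=creds)

    SHARED_FOLDER_ID = "0B..."  # placeholder: ID of the folder shared with the account
    result = drive.files().list(
        q=f"'{SHARED_FOLDER_ID}' in parents",
        fields="files(id, name)").execute()
    for f in result.get("files", []):
        print(f["id"], f["name"])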
Possible approach - dummy account
You could designate a new account to act as your "service account". In reality it won't be an actual service account; it will just be a dummy account that you can call something like "gdrivebot@yourdomain.com". Then you can share only what is absolutely necessary with it. I think this would be the only way to get the level of fine-grained control you are looking for. It would, however, require your admin to designate a new account just for this purpose.
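A possible sketch of that approach with PyDrive: authorize once interactively as the dummy account, save the refresh token, then reuse it non-interactively on the server (file names are placeholders):

    from pydrive.auth import GoogleAuth
    from pydrive.drive import GoogleDrive

    gauth = GoogleAuth()
    gauth.LoadCredentialsFile("gdrivebot-creds.json")  # saved token, if any
    if gauth.credentials is None:
        gauth.LocalWebserverAuth()   # one-time interactive step as gdrivebot@...
    elif gauth.access_token_expired:
        gauth.Refresh()
    else:
        gauth.Authorize()
    gauth.SaveCredentialsFile("gdrivebot-creds.json")

    drive = GoogleDrive(gauth)
    f = drive.CreateFile({"title": "report.xml"})
    f.SetContentFile("report.xml")  # local file to upload
    f.Upload()                      # lands only where gdrivebot has access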
I am using the Google Client Library for Java SDK in my Android app to interface with Google Drive.
Does Google act as a Data Controller or a Data Processor when this SDK is used? I need to know whether I need to store any data to show the user has consented to my app interfacing with Google Drive, in line with GDPR.
I know I need to ask permission for personalised or non-personalised ads but the Google Drive SDK and GDPR stuff is driving me crazy.
Thanks
Disclaimer: I am not a legal person; this is my opinion based on the guidelines we have been given. You should also seek independent legal advice relating to your status and obligations under the GDPR, as only a lawyer can provide you with legal advice specifically tailored to your situation.
For reference, I am going to quote from the following documents, which as of this writing are the only things Google has released with regard to GDPR that I am aware of:
Google Cloud & the General Data Protection Regulation (GDPR)
Google Cloud & the GDPR whitepaper
G Suite and Google Cloud Platform customers will typically act as the data controller for any personal data they provide to Google in connection with their use of Google's services. The data controller determines the purposes and means of processing personal data, while the data processor processes data on behalf of the data controller. Google is a data processor and processes personal data on behalf of the data controller when the controller is using G Suite or Google Cloud Platform.

Data controllers are responsible for implementing appropriate technical and organisational measures to ensure and demonstrate that any data processing is performed in compliance with the GDPR. Controllers' obligations relate to principles such as lawfulness, fairness and transparency, purpose limitation, data minimisation, and accuracy, as well as fulfilling data subjects' rights with respect to their data.

If you are a data controller, you may find guidance related to your responsibilities under GDPR by regularly checking the website of your national or lead data protection authority under the GDPR (as applicable), as well as by reviewing publications by data privacy associations such as the International Association of Privacy Professionals (IAPP).

You should also seek independent legal advice relating to your status and obligations under the GDPR, as only a lawyer can provide you with legal advice specifically tailored to your situation. Please bear in mind that nothing on this website is intended to provide you with, or should be used as a substitute for, legal advice.
G Suite is Google's suite of tools (Drive, Calendar, ...); Google is the data controller for the data behind those tools.
Controller vs. Processor
(7) ‘controller’ means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law;
(8) ‘processor’ means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller;
IMO
If you are accessing a user's data on Google Drive and changing it or doing anything with it, then yes, you are going to need to tell them what you are using their data for and log their consent. If you are saving their data anywhere, then you will also have to give them the ability to delete that data.
There are some things you can't do. For example, if they want to delete all their files on Drive, that's not your responsibility; that's Google's. You are only responsible for the data that's on your system and what you have done with it.
Using Google's client library, IMO, doesn't have much to do with GDPR; it's what you are doing with the data it returns that matters. I did contact Google a few months ago hoping to get some official guidelines with regard to GDPR and the client libraries. I have not heard anything as of yet.
I have a standard enterprise-level Web API which is going to be used across the organisation. I can demarcate consumers (Mainframe Systems, Online channel, etc.) through an API key.
However, many systems (say, many systems in the online channel) can share the same key. I need to identify each and every call uniquely and to verify that a key is not being shared.
Are there any ways to do this?
You could just log the IP address of each request. If all the systems are internal to your organisation, then it shouldn't be hard to match IPs to systems.
Edit
Or you could pass a client ID as well as the API key.
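A rough sketch of both suggestions combined, using Flask purely for illustration (the header names are made up; any framework's request hook would do):

    import logging
    from flask import Flask, request, abort

    app = Flask(__name__)
    log = logging.getLogger("api-audit")

    @app.before_request
    def identify_caller():
        api_key = request.headers.get("X-Api-Key")
        client_id = request.headers.get("X-Client-Id")  # hypothetical extra header
        if not api_key or not client_id:
            abort(401)
        # One line per call: enough to attribute traffic to a system and
        # to spot a key being reused from unexpected addresses.
        log.info("key=%s client=%s ip=%s path=%s",
                 api_key, client_id, request.remote_addr, request.path)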
I'm currently building a RESTful API to our web service, which will be accessed by 3rd-party web and mobile apps. We want a certain level of control over API consumers (i.e. those web and mobile apps), so we can throttle API requests and/or block certain malicious clients. For that purpose we want every developer who will be accessing our API to obtain an API key from us and use it to access our API endpoints.

For API calls that don't deal with a specific user's information, that's the only required level of authentication & authorization, which I call "app"-level A&A. However, some API calls deal with information belonging to specific users, so we need a way to allow those users to log in and authorize the app to access their data, which creates a second level ("user"-level A&A).
It makes a lot of sense to use OAuth2 for the "user"-level A&A and I think I have a pretty good understanding of what I need to do here.
I also implemented an OAuth1-like scheme, where app developers receive an API key & secret pair, supply their API key with every call, and use the secret to sign their requests (again, it's very OAuth1-like, and I should probably just use OAuth1 for that).
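Something like this minimal signing sketch (the canonicalization here is deliberately simplistic and just illustrative):

    import hashlib
    import hmac

    def sign_request(method, path, body, api_secret):
        # The caller sends X-Api-Key to identify itself and X-Signature
        # computed over a canonical form of the request.
        message = f"{method}\n{path}\n{body}".encode()
        return hmac.new(api_secret.encode(), message, hashlib.sha256).hexdigest()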
Now the problem I have is how to marry these two different mechanisms. My current hypothesis is that I continue to use the API key/secret pair to sign all requests to all API endpoints, and for those calls that require access to user-specific information, apps will additionally need to go through the OAuth2 flow, obtain access tokens, and supply them.
So, my question to the community is: does this sound like a good solution, or are there better ways to architect it?
I'd also appreciate any links to existing solutions that I could use instead of re-inventing the wheel (our service is Ruby/Rails-based).
Your key/secret pair isn't really giving you any confidence in the authorship of mobile apps. The secret will be embedded in the executable, then given to users, and there's really nothing you can do to prevent the user from extracting the key.
In the Stack Exchange API, we just use OAuth 2.0 and accept that all we can do is cut off abusive users (or IPs, in earlier revisions without OAuth). We do provide keys for tracking purposes, but they're not secret (and grant nothing of value, so there's no incentive to steal them).
In terms of preventing abuse, what we do is throttle based on IP in the absence of an auth token, but switch to a per-user throttle when there is one.
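In Python-flavoured pseudocode, that hybrid throttle amounts to something like this (window and limit numbers are arbitrary):

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 100
    _hits = defaultdict(list)

    def allow_request(ip, user_id=None):
        # Throttle per user when a token identified one, otherwise per IP.
        key = f"user:{user_id}" if user_id else f"ip:{ip}"
        now = time.time()
        recent = [t for t in _hits[key] if now - t < WINDOW_SECONDS]
        recent.append(now)
        _hits[key] = recent
        return len(recent) <= MAX_REQUESTS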
When dealing with purely malicious clients, we unleash the lawyers (malicious in our case is almost always violation of cc-wiki guidelines); technical solutions aren't sophisticated enough in our estimation. Note that the incidence of malicious clients is really really low (single digits in years of operation, with millions of daily API requests).
In short, I'd ditch OAuth 1.0 and switch your throttles to a hybrid of IP- and user-based.
I have a desktop application which uses flat files (some xml and small pictures) as data. I want this data to be available on other PCs which have the desktop application installed and usable by a smartphone client (WP7 at the moment) as well.
The user should have it very easy to synchronize this data. He should be able to use accounts he already possesses (Live-Login, Googlemail, Facebook,...).
I thought about using Azure Blob Storage to save the data in Azure, the Sync Framework to perform the actual synchronization and the Access Control Service to handle authentication.
I have not used any of these technologies before, so any advice would be great, but I'm looking foremost for errors or shortcomings in this strategy that I don't see yet. Is this approach viable at all?
Windows Azure is basically a virtualized datacentre. It is elaborate and complicated and is pitched at corporations who don't want to own their server infrastructure or hardware.
If I understand correctly, what you want is a cloud fileserver, not a whole LAN. Windows SkyDrive fulfils this requirement nicely and offers 25GB of storage per member with no charge for membership.
About Hotmail and Windows Live

People often confuse Hotmail and Windows Live, because when you set up a Hotmail account it uses Windows Live for authentication, and therefore you end up with a Windows Live account and all the associated facilities, including SkyDrive. However, it is entirely possible to set up a Windows Live account using any email address as the username.

If you do this, it is important to be aware that the Windows Live password associated with a given email address is completely independent of the password required by the mail server that hosts mail for the account. This can cause a great deal of user confusion. For Hotmail (or any other mail server that uses Windows Live for authentication) they are guaranteed to be the same password.
There is no official Microsoft framework support for SkyDrive. There is an open source project called SkyDriveApiClient, but it only works with the full .NET framework. I tried porting it but the author was a bit of an architecture astronaut, and it is absolutely riddled with [Serializable] which is not available on WP7x.
The WP7 guys have said that the WP7 framework will probably include support for SkyDrive but not in Mango (WP7.1) and given that Microsoft's typical release cycle is 18 months and Mango has yet to hit the streets, I'd say it will be two years before you can count on intrinsic cloud file services for WP7.
Roll-your-own wouldn't be hard; WCF services are dead easy to use from WP7. But that's not really cloud, since you have to provide and maintain the server infrastructure yourself. For this reason, and given the MS timetable, I have put a great deal of effort into producing my own SkyDrive client for WP7. Core functionality is complete and I am now refactoring, improving robustness, and adding performance enhancements like local caching of tokens (cookies, essentially). I don't intend to release it; I have a number of apps planned that depend on this functionality, and it suits me fine that there is a substantial barrier to competition.
I didn't tell you that to tease you. My point is that I'm so sure SkyDrive is the right answer that I put a lot of work into making it happen.
Cloud file storage is a perfect fit for mobile devices.
Azure is not a good answer for the sort of phone apps individuals want, because the data store isn't shared in a way that requires indexing or supports high levels of concurrency. I can certainly think of corporate phone apps that would benefit from using SQL Server as storage.
Azure can do file services, but it represents an ongoing expense. Nobody's going to put up with that when Google and Microsoft both give away web-based cloud storage.
I can personally attest that if you're determined, it is possible to use SkyDrive from WP7.
Cloud storage is the only way you're going to get programmatically accessible storage that's shared by your user's mobile device and his computer. One of the things I intend to do that depends on shared storage is write a Silverlight app that lets you prepare map routes with multiple waypoints on a desktop computer and a companion app that uses them on WP7.
The Windows Live team has released what they call support for WP7: a sample project showing you how to instantiate a browser object, load their login pages, manipulate them to log in, and use their JavaScript API to manipulate SkyDrive.
This has one big advantage: browser cookies and cached credentials. The disadvantages are obvious; technical shortcomings notwithstanding, the Windows Live team seems to think the only thing people want to do with a phone is tag their photos and fiddle with social media.
I have finished my own libraries. They do not support most of the social media twaddle; I have treated SkyDrive as no more or less than a cloud file system, providing:
    Authenticate(username, password)
    CreateFolder(folderpath[, blocking=false])
    Delete(fileOrFolderPath[, blocking=false])
    SaveString(filepath, value[, blocking=false])
    LoadString(filepath)
I could handle binaries, but Convert.ToBase64String makes this unnecessary, and strings are convenient for XML. CreateFolder, Delete and SaveString are optionally blocking. LoadString is always blocking because it's a function that returns the loaded string. CreateFolder is recursive, so you can create an entire path in one call (e.g. /folder1/folder2/folder3). Calling CreateFolder on a pre-existing path has no effect, and SaveString uses CreateFolder to ensure the path is valid, making it unnecessary to create a filepath in advance.

Authenticate loads the file system (except file content) into memory, eliminating server chatter. This is asynchronous, and a FileSystemReady event announces when the file system is completely loaded. The model is maintained as you add and remove files and folders.
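Since the library itself isn't published, here is a Python sketch that just pins down the contract described above (the real code is .NET; transport and auth details are deliberately left abstract):

    class SkyDriveClient:
        def Authenticate(self, username, password):
            """Starts loading the folder/file tree (not file content) into
            memory; a FileSystemReady event fires when the model is ready."""
            raise NotImplementedError

        def CreateFolder(self, folderpath, blocking=False):
            """Recursive: '/folder1/folder2/folder3' creates the whole path.
            Calling it on a pre-existing path has no effect."""
            raise NotImplementedError

        def Delete(self, file_or_folder_path, blocking=False):
            raise NotImplementedError

        def SaveString(self, filepath, value, blocking=False):
            """Calls CreateFolder on the parent path first, so a filepath
            never needs to be created in advance."""
            raise NotImplementedError

        def LoadString(self, filepath):
            """Always blocking, because it returns the loaded string."""
            raise NotImplementedError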
This was a lot of work, and no one responded to my attempt to make it an open source project, so I'm not inclined to give the fruits of my labour away; but provided your plans don't compete with mine, I could be persuaded to come to an arrangement.