Is it possible to run apps on Heroku that are HIPAA compliant? More specifically, I need two apps: one that stores member information and another that stores the members' private health information. I intend to encrypt sensitive data using both asymmetric and symmetric key encryption: asymmetric for the keys that link members with their sensitive data in the other app, and symmetric for specific fields in the members app, such as name, email address, and phone number. My main concern is that anyone at Heroku can break the asymmetric encryption, since they have access to both apps (and the private keys). Am I correct to be concerned about this, or does the infrastructure of Amazon EC2 prevent Heroku staff from accessing both apps?
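To make the scheme concrete, here is a minimal sketch of what I have in mind, using Node's built-in crypto module in TypeScript (field names, key sizes, and key handling are illustrative only, not the actual apps):

```typescript
import {
  createCipheriv,
  generateKeyPairSync,
  privateDecrypt,
  publicEncrypt,
  randomBytes,
} from "crypto";

// Symmetric (AES-256-GCM) encryption for individual member fields
// such as name, email address, and phone number.
const fieldKey = randomBytes(32); // held by the members app

function encryptField(plaintext: string) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", fieldKey, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    data: data.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
  };
}

// Asymmetric (RSA) encryption for the identifier that links a member to
// their health record in the second app. Only the PHI app would hold the
// private key; the members app only ever sees the public key.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

function encryptLinkId(linkId: string): Buffer {
  return publicEncrypt(publicKey, Buffer.from(linkId, "utf8"));
}

function decryptLinkId(ciphertext: Buffer): string {
  return privateDecrypt(privateKey, ciphertext).toString("utf8");
}
```

The worry is exactly that whoever operates both apps also has access to both `fieldKey` and `privateKey`.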
Amazon has a whitepaper on HIPAA compliance with AWS (just Google "AWS HIPAA compliance") where they talk about their HIPAA bona fides. For example, AWS sysadmins don't have direct login access to customer OS images.
To the best of my knowledge, Heroku has not shared details of how they secure their individual customer accounts.
HIPAA compliance involves a number of different areas, including more than just technology. Specifically regarding the technology requirements within HIPAA, there are a bunch of requirements, but the one that you most obviously can't meet with Heroku is this one:
164.314 Organizational requirements. (B) In accordance with 164.308(b)(2), ensure that any subcontractors that create, receive, maintain, or transmit electronic protected health information on behalf of the business associate agree to comply with the applicable requirements of this subpart by entering into a contract or other arrangement that complies with this section;
You need a BAA from Heroku. HIPAA doesn't distinguish between encrypted and unencrypted data when it defines subcontractors and business associates. For a good sense of everything HIPAA requires, here's a comprehensive list: https://catalyze.io/hipaa/. Hope that helps.
Heroku has told me they will not sign Business Associate Agreements at the moment, so if you store any PHI on their servers, it is not possible to be HIPAA compliant.
Heroku has announced their Shield accounts that will provide HIPAA compliance.
From the link
The Shield Private Dyno includes an encrypted ephemeral file system
and restricts SSL termination from using TLS 1.0 which is considered
vulnerable. Shield Private Postgres further guarantees that data is
always encrypted in transit and at rest. Heroku also captures a high
volume of security monitoring events for Shield dynos and databases
which helps meet regulatory requirements without imposing any extra
burden on developers.
That may or may not obviate the need for BAAs, MOUs, etc.
In many articles where writers show how to deploy a Laravel app/website on shared hosting, they discourage doing it. Some Quora answers state that it is possible but carries security risks. So what security risks does this practice imply?
The honest answer depends on the type of project and/or customer (agency vs. enterprise).
If you are working on a smaller project and there is no ongoing development (and thus no extra invoicing) for it, I would recommend shared hosting.
But if your budget is higher and the application needs to grow, handles sensitive user data, and needs automated deployments and unit testing, together with Docker or Vagrant for local development, I would recommend AWS or DigitalOcean.
The biggest problem with AWS is that it pushes the responsibility for keeping the operating system and PHP version up to date onto you.
With enterprise customers, I would recommend using services like:
Use a security scanner (https://detectify.com/)
Use a web application firewall (https://www.cloudflare.com/en-gb/)
Basically it all depends on the type of customer you are dealing with.
But for really small/tiny projects, just use shared hosting, and never forget CSRF protection, reCAPTCHA, request throttling, etc. Be smart about it.
We are building a video call application utilising the Amazon Chime SDK. Our application serves customers in the UK and needs to be GDPR compliant.
Amazon Chime's compliance info page doesn't explicitly state anything in relation to GDPR compliance. However, AWS itself states that it is GDPR compliant, and Chime is an AWS service.
So we are not sure whether Chime itself is GDPR compliant. Could someone please advise if they have any relevant information to confirm or deny Chime's GDPR compliance conclusively?
After multiple attempts we did get a response - albeit vague - from AWS.
At the foundation of Amazon Chime security is Amazon Web Services
(AWS) Security. AWS regions and networks are built and operated to
meet the requirements of some of the world’s most security-sensitive
organizations. AWS constantly undergoes third-party audits by a
variety of public sector and private sector auditing organizations in
order to maintain its status under multiple compliance offerings, such
as the credit card industry’s PCI DSS Level 1, the U.S. Government’s
FedRAMP program, C5 Certification in Germany, and IRAP assessment by
the Australia Government. For more information, see the AWS Security
and AWS Compliance websites. Amazon Chime is designed and operated
according to the same AWS standards, has undergone the compliance
process required to be a HIPAA-eligible service, and is currently in
the process of being added to other relevant compliance programs.
The Amazon Chime SDK can be used by customers who incorporate GDPR
best practices and compliance using our Shared Responsibility Model.
So they seem to imply it can be used in a GDPR compliant way.
Additional info: specific to the chat feature, AWS advised us to use the data-messaging API route to ensure data relay and retention stay within the EU (see the sketch after the quoted reply below).
All chat messages in the Chime app are relayed and stored in us-east-1
(Virginia). The chat messages always leave the UK.
There is a data messaging API in the SDK that can be used to build
chat.
(https://aws.github.io/amazon-chime-sdk-js/modules/apioverview.html#9-send-and-receive-data-messages-optional)
These messages flow through the same region that is used to host the
meeting (London, for example) and they are persisted there for a few
minutes and until the end of the meeting so that they can be relayed
to other participants during that meeting.
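For reference, the data messaging calls in amazon-chime-sdk-js that AWS pointed us at look roughly like this (a hedged sketch; the topic name, payload, and the meeting/attendee responses from our backend are illustrative):

```typescript
import {
  ConsoleLogger,
  DefaultDeviceController,
  DefaultMeetingSession,
  LogLevel,
  MeetingSessionConfiguration,
} from "amazon-chime-sdk-js";

// meetingResponse / attendeeResponse come from your backend's
// CreateMeeting / CreateAttendee calls (made in eu-west-2, for example).
declare const meetingResponse: any;
declare const attendeeResponse: any;

const logger = new ConsoleLogger("ChimeChat", LogLevel.INFO);
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
const session = new DefaultMeetingSession(configuration, logger, new DefaultDeviceController(logger));

const CHAT_TOPIC = "chat"; // illustrative topic name

// Receive chat messages relayed through the meeting's host region.
session.audioVideo.realtimeSubscribeToReceiveDataMessage(CHAT_TOPIC, (message) => {
  console.log(`${message.senderAttendeeId}: ${message.text()}`);
});

// Send a chat message; lifetimeMs lets participants who join later receive it too.
session.audioVideo.realtimeSendDataMessage(CHAT_TOPIC, "Hello from the UK", 300_000);
```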
I believe Amazon Chime is not GDPR compliant. The website provides no way to export existing user data. The documented approach to exporting history is to scroll back in the chat history and copy paste:
https://answers.chime.aws/questions/629/how-can-i-save-all-the-data-from-a-chat-room-or-co.html
Talk to your AWS technical POC; I am sure they can help you understand this better. AWS is a big ecosystem of services, and Chime used together with other services can be made GDPR compliant.
For instance, all Chime events are tracked via Amazon EventBridge, so it should be pretty easy to attribute and track all data for a specific user.
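As a hedged sketch of what that could look like (the rule name, region, and target ARN are illustrative), an EventBridge rule matching the aws.chime event source can forward Chime events to your own audit pipeline:

```typescript
import { EventBridgeClient, PutRuleCommand, PutTargetsCommand } from "@aws-sdk/client-eventbridge";

async function setUpChimeAuditRule() {
  const client = new EventBridgeClient({ region: "eu-west-2" });

  // Match every event emitted by Chime (meetings, attendees, messaging, ...).
  await client.send(new PutRuleCommand({
    Name: "chime-audit-log", // illustrative rule name
    EventPattern: JSON.stringify({ source: ["aws.chime"] }),
  }));

  // Forward matched events to a Lambda (illustrative ARN) that attributes
  // each event to one of your users and writes it to an audit log.
  await client.send(new PutTargetsCommand({
    Rule: "chime-audit-log",
    Targets: [{ Id: "audit-lambda", Arn: "arn:aws:lambda:eu-west-2:123456789012:function:chime-audit" }],
  }));
}
```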
Google APIs can have usage limits, both on a per-user and a per-application basis. For example, the Gmail API free tier is limited to a billion daily quota units across all users of your application.
This works for well-designed server-side applications, which can centrally ensure they obey these usage limits. However, I’m not sure how this is supposed to work for client-side apps. As Google’s documentation says,
Installed apps are distributed to individual devices, and it is assumed that these apps cannot keep secrets.
These apps are still supposed to use a client_secret and credentials, but these are assumed to not be confidential despite the name. However, just saying they aren’t secret doesn’t prevent abuse; a user of the app can take the credentials file and use it for a different purpose, perhaps one that uses the APIs more. What can an application developer do to prevent people doing this from burning through all the available quota?
Edit for clarification:
The use case that prompted this is a purely desktop app that doesn't connect to any service except Gmail (see https://github.com/mbrt/gmailctl/issues/48). If it weren't for a global quota shared by all users of the app, there would be no reason to worry about individual users at all; they don't connect to any service except Gmail itself.
You could write a server app (a Cloud Function would work) which holds the secrets. Clients call your endpoint with some form of identifier and you return an access token. If your users have a browser, they can authenticate each time; if not, you would need to request a refresh token, store it, and use it to generate access tokens.
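A rough sketch of such an endpoint, assuming Node with the Functions Framework and google-auth-library (the client-identifier check and refresh-token lookup are placeholders you would replace with your own storage and validation logic):

```typescript
import * as functions from "@google-cloud/functions-framework";
import { OAuth2Client } from "google-auth-library";

// Held server-side only; never shipped with the desktop app.
const CLIENT_ID = process.env.CLIENT_ID!;
const CLIENT_SECRET = process.env.CLIENT_SECRET!;

// Placeholder: look up the refresh token previously stored for this client,
// e.g. in Firestore or a database keyed by the identifier.
async function refreshTokenFor(clientIdentifier: string): Promise<string | undefined> {
  return undefined;
}

functions.http("token", async (req, res) => {
  const clientIdentifier = req.query.client as string | undefined;
  const refreshToken = clientIdentifier && (await refreshTokenFor(clientIdentifier));
  if (!refreshToken) {
    res.status(403).send("Unknown client");
    return;
  }

  const oauth2Client = new OAuth2Client(CLIENT_ID, CLIENT_SECRET);
  oauth2Client.setCredentials({ refresh_token: refreshToken });

  // Exchanges the stored refresh token for a short-lived access token.
  const { token } = await oauth2Client.getAccessToken();
  res.json({ access_token: token });
});
```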
I am using the Google Client Library for Java SDK in my Android app to interface with Google Drive.
Does Google act as a Data Controller or a Data Processor when this SDK is used? I need to know whether I need to store any data to show the user has consented to my app interfacing with Google Drive, in line with GDPR.
I know I need to ask permission for personalised or non-personalised ads but the Google Drive SDK and GDPR stuff is driving me crazy.
Thanks
Disclaimer: I am not a legal person; this is my opinion based on the guidelines we have been given. You should seek independent legal advice relating to your status and obligations under the GDPR, as only a lawyer can provide you with legal advice specifically tailored to your situation.
For reference, I am going to quote from the following documents, which as of this writing are the only things Google has released with regard to GDPR that I am aware of:
Google Cloud & the General Data Protection Regulation
GOOGLE CLOUD & THE GDPR WHITEPAPER
G Suite
and Google Cloud Platform customers will typically act as
the data controller for any personal data they provide to Google in
connection with their use of Google’s services. The data controller
determines the purposes and means of processing personal data,
while the data processor processes data on behalf of the data
controller. Google is a data processor and processes personal data
on behalf of the data controller when the controller is using G Suite
or Google Cloud Platform.
Data controllers are responsible for implementing appropriate
technical and organisational measures to ensure and demonstrate
that any data processing is performed in compliance with the GDPR.
Controllers’ obligations relate to principles such as lawfulness,
fairness and transparency, purpose limitation, data minimisation,
and accuracy, as well as fulfilling data subjects’ rights with respect
to their data.
If you are a data controller, you may find guidance related to your
responsibilities under GDPR by regularly checking the website of
your national or lead data protection authority under the GDPR (as
applicable), as well as by reviewing publications by data privacy
associations such as the International Association of Privacy
Professionals (IAPP).
You should also seek independent legal advice relating to your status
and obligations under the GDPR, as only a lawyer can provide you with
legal advice specifically tailored to your situation. Please bear in mind
that nothing on this website is intended to provide you with, or should
be used as a substitute for legal advice.
G Suite is Google's suite of tools (that being Drive, Calendar, ...); they are the data controller for the data behind the Google tools.
Controller vs. Processor
(7) ‘controller’ means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law;
(8) ‘processor’ means a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller;
IMO
If you are accessing a user's data on Google Drive and changing it or doing anything with it, then yes, you are going to need to tell them what you are using their data for and log their consent. If you are saving their data anywhere, you will also have to give them the ability to delete that data.
There are some things you can't do; for example, if they want to delete all their files on Drive, that's not your responsibility, that's Google's. You are only responsible for the data that's on your system and what you have done with it.
Using Google's client library, IMO, doesn't have much to do with GDPR; it's what you are doing with the data it returns that matters. I contacted Google a few months ago hoping to get some official guidelines regarding GDPR and the client libraries; I have not heard anything yet.
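IMO the practical takeaway from the above is to keep a record of what each user agreed to and when, so you can demonstrate consent later and honour deletion requests. A minimal sketch of such a record (the field names are mine, not anything mandated by GDPR):

```typescript
// Minimal shape of a consent record you might persist alongside your own data.
interface ConsentRecord {
  userId: string;          // your app's user id, not Google's
  purpose: string;         // what you use the Drive data for, in plain language
  scopesGranted: string[]; // e.g. ["https://www.googleapis.com/auth/drive.file"]
  consentedAt: string;     // ISO 8601 timestamp of when consent was given
  withdrawnAt?: string;    // set when the user withdraws consent / asks for deletion
}

const example: ConsentRecord = {
  userId: "user-123",
  purpose: "Back up selected documents to the user's Google Drive",
  scopesGranted: ["https://www.googleapis.com/auth/drive.file"],
  consentedAt: new Date().toISOString(),
};
```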
I'm currently building a RESTful API for our web service, which will be accessed by third-party web and mobile apps. We want a certain level of control over API consumers (i.e. those web and mobile apps), so we can throttle API requests and/or block malicious clients. For that purpose we want every developer who accesses our API to obtain an API key from us and use it to access our API endpoints. For some API calls that don't deal with specific user information, that's the only required level of authentication & authorization, which I call "app"-level A&A. However, some API calls deal with information belonging to specific users, so we need a way to allow those users to log in and authorize the app to access their data, which creates a second level ("user"-level A&A).
It makes a lot of sense to use OAuth2 for the "user"-level A&A and I think I have a pretty good understanding of what I need to do here.
I also implemented an OAuth1-like scheme, where app developers receive an API key & secret pair, supply their API key with every call, and use the secret to sign their requests (again, it's very OAuth1-like, and I should probably just use OAuth1 for that).
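For concreteness, the signing side of that scheme looks roughly like this (a minimal sketch; the header names and canonical-string format are just what I have in mind, not a standard):

```typescript
import { createHmac } from "crypto";

// Illustrative: sign a request with the developer's API secret so the server
// can verify it was produced by the holder of that secret.
function signRequest(apiKey: string, apiSecret: string, method: string, path: string, body: string) {
  const timestamp = Date.now().toString();
  const canonical = [method.toUpperCase(), path, timestamp, body].join("\n");
  const signature = createHmac("sha256", apiSecret).update(canonical).digest("hex");
  return {
    "X-Api-Key": apiKey,      // identifies the app
    "X-Timestamp": timestamp, // lets the server reject stale or replayed requests
    "X-Signature": signature, // HMAC over method, path, timestamp and body
  };
}

// Usage: attach the returned headers to the outgoing request.
const headers = signRequest("my-api-key", "my-api-secret", "GET", "/v1/widgets", "");
```

The server would re-compute the HMAC from the same fields and compare it with the supplied signature.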
Now the problem I have is how to marry those two different mechanisms. My current hypothesis is that I continue to use the API key/secret pair to sign all requests to all API endpoints, and for calls that require access to user-specific information, apps additionally go through the OAuth2 flow, obtain access tokens, and supply them.
So, my question to the community is: does this sound like a good solution, or are there better ways to architect this?
I'd also appreciate any links to existing solutions that I could use instead of re-inventing the wheel (our service is Ruby/Rails-based).
Your key/secret pair isn't really giving you any confidence in the authorship of mobile apps. The secret will be embedded in the executable, then given to users, and there's really nothing you can do to prevent a user from extracting it.
In the Stack Exchange API, we just use OAuth 2.0 and accept that all we can do is cutoff abusive users (or IPs, in earlier revisions without OAuth). We do provide keys for tracking purposes, but they're not secret (and grant nothing of value, so there's no incentive to steal them).
In terms of preventing abuse, what we do is throttle based on IP in the absence of an auth token, but switch to a per-user throttle when there is one.
When dealing with purely malicious clients, we unleash the lawyers (malicious in our case is almost always violation of cc-wiki guidelines); technical solutions aren't sophisticated enough in our estimation. Note that the incidence of malicious clients is really really low (single digits in years of operation, with millions of daily API requests).
In short, I'd ditch OAuth 1.0 and switch your throttles to a hybrid of IP and user based.
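A minimal sketch of that hybrid throttle (window size, limits, and how the user id is derived from the access token are illustrative):

```typescript
// Hybrid throttle: key the rate limit on the authenticated user when an
// OAuth token is present, otherwise fall back to the caller's IP address.
const WINDOW_MS = 60_000;
const ANON_LIMIT = 30;   // requests per IP per window (illustrative)
const USER_LIMIT = 300;  // requests per user per window (illustrative)

const counters = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, userId?: string): boolean {
  const key = userId ? `user:${userId}` : `ip:${ip}`;
  const limit = userId ? USER_LIMIT : ANON_LIMIT;
  const now = Date.now();
  const entry = counters.get(key);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    counters.set(key, { count: 1, windowStart: now });
    return true;
  }
  return ++entry.count <= limit;
}

// allowRequest("203.0.113.7")            -> throttled per IP (anonymous call)
// allowRequest("203.0.113.7", "user-42") -> throttled per authenticated user
```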