I'm getting a bit lost in the diverse documentation pages (here, here, to name a few…)
This one is pretty usable for a given account: you provide a JSON key as an environment variable.
The thing is, I just don't see how commands can be run on behalf of a user authenticated via OAuth. Practically speaking, where do you specify the OAuth user token?
Thanks for sharing this insight
Best
google-cloud-ruby (which you linked in your question) is designed to provide access via service account credentials, as you noted. For "lower-level" access in which you manage your own OAuth tokens, you might consider google-auth-library-ruby. However, if you can use a service account instead of a user account, the higher-level access provided by google-cloud-ruby is probably the best approach, as recommended in Google Cloud Storage Authentication:
Due to the complexity of managing and refreshing access tokens and the security risk when dealing directly with cryptographic applications, we strongly encourage you to use a verified client library.
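Since the question asked where the OAuth user token actually goes, here is a minimal sketch of that "lower-level" pattern in Python (the flow with google-auth-library-ruby is analogous); the token values and project name are placeholders, not working credentials:

```python
# Instead of letting the client library read a service-account JSON key from
# the environment, build a user-credentials object from an OAuth token you
# manage yourself and hand it to the client explicitly.
from google.oauth2.credentials import Credentials
from google.cloud import storage

# A user access/refresh token pair obtained from your own OAuth flow
# (all values below are placeholders).
creds = Credentials(
    token="ya29....",
    refresh_token="1//0g...",
    token_uri="https://oauth2.googleapis.com/token",
    client_id="your-client-id",
    client_secret="your-client-secret",
)

# The client now acts on behalf of that user rather than a service account.
client = storage.Client(project="MY_PROJECT", credentials=creds)
for bucket in client.list_buckets():
    print(bucket.name)
```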
I am writing a server-side Python script with PyDrive which needs to store a file in a specific gdrive. PyDrive and this post suggest using a service account.
However, this would mean that all gdrives are accessible with the credentials of this service account, and I would rather avoid that.
Ideally, only one specific gdrive, or all gdrives that one specific user has access to, should be accessible.
Is it possible to give programmatic access to only one specific gdrive?
[Edit]
As mentioned in the comments, I am apparently not looking for an OAuth flow.
I am looking for server-to-server communication that accesses one specific Google Drive following the principle of least privilege. Doing this with a service account + domain-wide delegation and the Google Drive r/w scope would mean that all Google Drives can be accessed with this service account, which is not what I want.
Unfortunately, there is a domain-wide policy in place which forbids sharing Google Drives with "other" domains. This means I cannot use a service account without domain-wide delegation and just share the drive with it.
I don't understand what you mean by "programmatically" when you tag the question as oauth, asking for the OAuth2 flow, which is interactive. When there is nobody to press the buttons, that probably isn't the authentication flow you're looking for. Just share a directory with a service account; no domain-wide delegation is required (with that enabled, there would be no need to share it).
One could even abstract the Drive API access credentials away entirely by using a simple Cloud Function whose task is to update the one file, triggered through HTTP and utilizing the Drive API.
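For concreteness, a minimal sketch of the "share a folder with the service account" approach using google-api-python-client; the folder ID and file names are placeholders, and this of course only applies where the domain policy allows sharing to the service account's address:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# The broad scope is bounded in practice by sharing: without domain-wide
# delegation, the service account can only reach items explicitly shared with it.
SCOPES = ["https://www.googleapis.com/auth/drive"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
drive = build("drive", "v3", credentials=creds)

# Upload into the one folder that was shared with the service account.
metadata = {"name": "report.csv", "parents": ["FOLDER_ID"]}  # placeholder ID
media = MediaFileUpload("report.csv", mimetype="text/csv")
drive.files().create(body=metadata, media_body=media, fields="id").execute()
```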
Possible approach - dummy account
You could designate a new account to act as your "service account". In reality it won't be an actual service account; it will just be a dummy account that you can call something like "gdrivebot@yourdomain.com". Then you can share only what is absolutely necessary with it. I think this would be the only way to get the level of fine-grained control you are looking for, though it does require your admin to designate a new account just for this purpose.
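A sketch of how the dummy-account idea might look with PyDrive: one interactive consent as the dummy account, after which the cached refresh token lets the server-side script run unattended (the credentials file name and folder ID are placeholders):

```python
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LoadCredentialsFile("gdrivebot_creds.json")
if gauth.credentials is None:
    gauth.LocalWebserverAuth()   # one-time interactive consent as the dummy account
elif gauth.access_token_expired:
    gauth.Refresh()              # unattended runs reuse the cached refresh token
else:
    gauth.Authorize()
gauth.SaveCredentialsFile("gdrivebot_creds.json")

# Only folders shared with the dummy account are reachable.
drive = GoogleDrive(gauth)
f = drive.CreateFile({"title": "report.csv", "parents": [{"id": "FOLDER_ID"}]})
f.SetContentFile("report.csv")
f.Upload()
```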
I would like to know whether providing temporary URLs to access AWS bucket objects is secure, in the sense that it might disclose the AWS Access Key.
As I understand it, using an IAM user's access key for accessing an AWS bucket, rather than the root access key, can be highly secure if the IAM user is only permitted to read/write S3.
Are there any disadvantages to providing temporary URLs to the public using an IAM user's access key?
Regards.
Presigning a URL doesn't give away your secret access key, so there is no risk of disclosure. An attacker couldn't take a signed URL and alter it to do something else, as the signature verifies the original payload.
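For illustration, generating a presigned GET URL with boto3 (bucket and key names are placeholders); the resulting query string carries the access key ID and a signature, never the secret:

```python
import boto3

s3 = boto3.client("s3")  # uses the locked-down IAM user's credentials

# Tampering with the bucket, key, or expiry invalidates the signature.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "reports/2020.csv"},
    ExpiresIn=3600,  # seconds until the URL stops working
)
print(url)
```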
You are correct: it's considered good practice to lock down IAM profiles to specific tasks, such as a specific application environment.
According to the official AWS docs:
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.
Your disadvantages question isn't clear to me, so I'll answer two ways I think you might have meant it:
Tailored IAM profiles over root:
No disadvantage. Requires a bit more time and awareness to plan your policy / permission requirements, but that's a good thing.
Pre-signed URLs over conventional uploads / downloads:
This depends on your use case. Generally speaking, there are no extra security considerations when using presigned URLs. Just set a realistic expiry time and don't give the URL to the wrong person. It's a lot like a session / bearer token that way.
In terms of advantages, they open doors to making your application more scalable and remove the need for your application to waste cycles "watching" an authorized upload or download. Vapor (Laravel 6 on Lambda) promotes presigned URLs as a feature for file uploads.
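The upload variant of the same idea, sketched with boto3 (names are placeholders): the client then PUTs the bytes straight to S3 instead of routing them through your application servers.

```python
import boto3
import requests

s3 = boto3.client("s3")
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/avatar.png"},
    ExpiresIn=300,  # keep upload URLs short-lived
)

# The client (browser, mobile app, etc.) uploads directly to S3,
# so the application server never proxies the file contents.
with open("avatar.png", "rb") as fh:
    requests.put(upload_url, data=fh)
```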
I have three client-facing web applications, all on different subdomains (one of these web applications actually has 700+ different subdomains that are changing all the time). I've written an OAuth server that I was going to use to let users log in to each of these systems; this works, but while writing the logout code I've begun running into differences between what actually happens and what I would like the behavior to be.
Some of my requirements for single sign-on are:
If logged in on one system, you are logged in on all systems (obviously).
If logging out of one system, you are logged out of all systems. Even across subdomains.
If you are logged in on two different machines (for example, a cellphone and a desktop), logging out on your cellphone should NOT log you out on your desktop.
We have already written the OAuth provider and we'll be using it for projects not coupled to our domain (APIs, etc.), but I'm not entirely convinced that OAuth is the best solution for the requirements outlined above. I'm thinking that a shared session might be better. The shared-session idea would involve a cookie stored on the main domain that holds information about the currently logged-in user.
What are the pros and cons of either approach? Are there other approaches you might take? Are there security risks to consider? Concurrency and scalability considerations? Please help!
I would take the OAuth route, with a variation.
OAuth:
The approach I would prefer is an access token issued at the device level (user + application/device).
That is, there will be a process for registering your device and granting it access.
This results in an access token specific to the device, stored on it as the case may be (e.g., a mobile app may need a longer-expiry access token, a web page a shorter-lived one).
This way you can decouple login/logout across devices (see the sketch after the pros and cons below).
However, the cons of this approach are:
More complicated implementation, as it involves device registration
Tracking the actions of each user is harder, as you have two or more access tokens tied to a user
Pros:
This is a fairly standard approach
Con 2 can be worked around (by adding access-token attributes, etc.)
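To make the device-level-token idea concrete, here is a minimal sketch; the in-memory store, names, and lifetimes are invented purely for illustration:

```python
import secrets
import time

# Stand-in for a token store; a real system would use a database or Redis.
tokens = {}

def issue_token(user_id: str, device: str, ttl_seconds: int) -> str:
    """Issue an access token bound to one (user, device) pair."""
    token = secrets.token_urlsafe(32)
    tokens[token] = {"user": user_id, "device": device,
                     "expires": time.time() + ttl_seconds}
    return token

def logout(token: str) -> None:
    """Revoking one device's token leaves the user's other devices logged in."""
    tokens.pop(token, None)

# Longer-lived token for mobile, shorter for web, as suggested above.
phone = issue_token("alice", "cellphone", ttl_seconds=30 * 24 * 3600)
desktop = issue_token("alice", "desktop", ttl_seconds=8 * 3600)
logout(phone)  # alice's desktop session survives
assert desktop in tokens and phone not in tokens
```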
Session-based SSO management
Pros:
Simpler than OAuth
Cons:
Security constraints around session/cookie handling
Extensibility to add more use cases later is limited
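For comparison, a minimal sketch of the shared-session idea from the question, using Flask; the domain and the in-memory store are placeholders. Note it also satisfies the per-device logout requirement, because the cookie (and its session record) exists only in the browser that logged in:

```python
import secrets
from flask import Flask, request, make_response

app = Flask(__name__)
sessions = {}  # shared store; in practice Redis or a DB reachable by all apps

@app.route("/login", methods=["POST"])
def login():
    sid = secrets.token_urlsafe(32)
    sessions[sid] = {"user": request.form["user"]}
    resp = make_response("ok")
    # domain=".example.com" makes the cookie visible to every subdomain
    resp.set_cookie("sso_session", sid, domain=".example.com",
                    secure=True, httponly=True, samesite="Lax")
    return resp

@app.route("/logout", methods=["POST"])
def logout():
    # Deleting the one shared record logs this browser out of every subdomain.
    sessions.pop(request.cookies.get("sso_session"), None)
    return "ok"
```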
I'm currently building a RESTful API for our web service, which will be accessed by 3rd-party web and mobile apps. We want a certain level of control over API consumers (i.e. those web and mobile apps), so we can throttle API requests and/or block malicious clients. For that purpose we want every developer who accesses our API to obtain an API key from us and use it on our API endpoints. For API calls that don't deal with user-specific information, that's the only required level of authentication & authorization, which I call "app"-level A&A. However, some API calls deal with information belonging to specific users, so we need a way for those users to log in and authorize the app to access their data, which creates a second level ("user"-level A&A).
It makes a lot of sense to use OAuth2 for the "user"-level A&A and I think I have a pretty good understanding of what I need to do here.
I have also implemented an OAuth1-like scheme, where app developers receive an API key & secret pair, supply their API key with every call, and use the secret to sign their requests (again, it's very OAuth1-like and I should probably just use OAuth1 for that).
Now the problem I have is how to marry those two mechanisms. My current hypothesis is that I continue to use the API key/secret pair to sign all requests, granting access to all API endpoints, and for calls that touch user-specific information the apps additionally go through the OAuth2 flow, obtain access tokens, and supply them.
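A minimal sketch of what the signing half of that scheme might look like; the header names and message layout here are invented for illustration, not any standard:

```python
import hashlib
import hmac
import time

def sign_request(api_key: str, api_secret: str,
                 method: str, path: str, body: bytes) -> dict:
    """Build headers for an OAuth1-style signed request (illustrative only)."""
    timestamp = str(int(time.time()))  # limits replay of captured requests
    message = b"\n".join([method.encode(), path.encode(),
                          timestamp.encode(), body])
    signature = hmac.new(api_secret.encode(), message,
                         hashlib.sha256).hexdigest()
    return {"X-Api-Key": api_key, "X-Timestamp": timestamp,
            "X-Signature": signature}

# The server recomputes the HMAC with its stored copy of the secret and
# compares with hmac.compare_digest(); the secret never travels on the wire.
headers = sign_request("my-key", "my-secret", "GET", "/v1/widgets", b"")
```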
So, my question to the community is: does this sound like a good solution, or are there better ways to architect it?
I'd also appreciate any links to existing solutions that I could use instead of re-inventing the wheel (our service is Ruby/Rails-based).
Your key/secret pair isn't really giving you any confidence in the authorship of mobile apps. The secret will be embedded in the executable, then given to users, and there's really nothing you can do to prevent the user from extracting the key.
In the Stack Exchange API, we just use OAuth 2.0 and accept that all we can do is cut off abusive users (or IPs, in earlier revisions without OAuth). We do provide keys for tracking purposes, but they're not secret (and grant nothing of value, so there's no incentive to steal them).
In terms of preventing abuse, what we do is throttle based on IP in the absence of an auth token, but switch to a per-user throttle when there is one.
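That hybrid throttle is easy to sketch; the window size and limits below are invented numbers, and a real deployment would back this with Redis or similar rather than process memory:

```python
import time
from collections import defaultdict
from typing import Optional

WINDOW = 60        # seconds (invented numbers throughout)
IP_LIMIT = 30      # anonymous requests per window per IP
USER_LIMIT = 300   # authenticated requests per window per user

hits = defaultdict(list)  # (kind, id) -> recent request timestamps

def allow(ip: str, user_id: Optional[str]) -> bool:
    """Throttle by IP without a token, by user once one is presented."""
    key, limit = ((("user", user_id), USER_LIMIT) if user_id
                  else (("ip", ip), IP_LIMIT))
    now = time.time()
    hits[key] = [t for t in hits[key] if now - t < WINDOW]  # drop old hits
    if len(hits[key]) >= limit:
        return False  # caller should respond with HTTP 429
    hits[key].append(now)
    return True
```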
When dealing with purely malicious clients, we unleash the lawyers (malicious in our case is almost always a violation of the cc-wiki guidelines); technical solutions aren't sophisticated enough, in our estimation. Note that the incidence of malicious clients is really, really low (single digits in years of operation, with millions of daily API requests).
In short, I'd ditch the OAuth 1.0-style signing and switch your throttles to a hybrid of IP-based and user-based.
We are working on a service that will have website access for stats and other tasks, but the majority of use will be through a client gem and rake tasks. What is the best way to handle authentication for both pieces?
It looks like fiveruns_tuneup, getexceptional, New Relic, and others have websites with username and password, but use API keys stored in ./config/serviceName.yml. Are there any reasons it is better to have API keys as opposed to user/pass in the config? (Do they use keys because the key is often checked into SCM and used across the project, whereas ours would not be checked in and would be a per-user setting?)
GitHub has you put your public key on the GitHub servers and uses that, but I think git supports public/private keys by default.
Would it be preferable to keep ./config/serviceName.yml, or, since we have to create a subdirectory with other information anyway, ./serviceName/config.yml? (Does per-user, not stored in SCM, mean it is better to keep it all in one excluded directory?)
Just looking for some thoughts and ideas on best practices before starting implementation.
I recommend that you use username/password combos for website accounts, and API keys for any web services. Here are the advantages of this technique:
By linking API keys to an account, you could have many API keys for the same user. Perhaps this could be used for many remote web servers that consume this data service, or to perform unique tracking.
Attaching API keys to an account also lets you keep the user's username and password uncompromised since an API key will not contain them. Many users use the same username and password on many services, so you are helping to protect them.
You could limit access to portions of functionality for each API key, but give their username access to everything their account should have access to. Additionally, you can even give them the ability to limit how much access an API key might have.
Most of the major services (Yahoo! API, Flickr, Google API, etc) use accounts with a username and password to login to the web account, and API keys for integration points.
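A toy sketch of that account-plus-keys split (in-memory store and invented scope names, purely for illustration):

```python
import secrets

# Each user holds one password; any number of revocable, scoped API keys hang
# off the account, so a leaked key never exposes the password itself.
api_keys = {}  # key -> {"user": ..., "scopes": ...}; a database in practice

def create_api_key(user_id: str, scopes: set) -> str:
    key = secrets.token_urlsafe(32)
    api_keys[key] = {"user": user_id, "scopes": scopes}
    return key

def authorize(key: str, scope: str):
    """Return the owning user if the key exists and grants the scope."""
    record = api_keys.get(key)
    return record["user"] if record and scope in record["scopes"] else None

k = create_api_key("alice", {"stats:read"})
assert authorize(k, "stats:read") == "alice"
assert authorize(k, "stats:write") is None  # the key is limited, the account is not
```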
Never use user/pass when you can help it. The security issues are horrible. If the user/pass leaks out, you have to change your password, or else whoever has it gets access to your whole account.
API keys are better because they're easier to change and can be limited to only the parts of the API you need access to (i.e., if someone has your password they can change your password; they can't if they just have an API key).
A different API key per client, or a secure token exchange (such as OAuth), is the best solution if you'll have more than just your own client on the API.
The GitHub approach bootstraps on top of existing git practices, but it's not a bad idea, since presumably each user will have their own private key to match a published public one in the central authority. Since key agents already furnish a means of safe authentication, this seems like a very safe approach. Public/private keys are a well-thought-out authentication scheme, one that has unfortunately been reinvented many times with limited success.
The problem with an API key is that anyone who gets a copy of it can do whatever it authorizes. Storing the API key somewhere in the project practically begs users to share the key. If you associate public keys with users instead, it is possible to grant rights to the client on a per-user basis, and a proper key-agent approach means those keys are never stored in an SCM at all.
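The challenge-response idea behind that key-agent flow can be sketched in a few lines with the third-party cryptography package; this shows the general shape, not GitHub's actual protocol:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays on the user's machine
public_key = private_key.public_key()       # registered with the service

challenge = os.urandom(32)                  # server-issued nonce
signature = private_key.sign(challenge)     # client proves key possession

try:
    public_key.verify(signature, challenge)  # server checks the stored key
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Nothing secret ever lands in the project directory or the SCM; only the public key is shared, which is exactly the property the API-key-in-config approach lacks.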
I'm not sure I follow the distinction between config/serviceName.yml and serviceName/config.yml; it doesn't seem pertinent if you have public/private keys as the client's authentication method.