When using the gcloud CLI commands, I can perform every action on my database. Yet when I try to do the same thing from Go (from the same shell instance I used for the gcloud commands), I get an error with the message:
spanner: code = "PermissionDenied", desc = "Resource projects/todo/instances/todospanner/databases/tododb is missing IAM permission: spanner.sessions.create."
The code I am trying to run is taken from the example found here: https://cloud.google.com/spanner/docs/getting-started/go/
I can't find that permission (spanner.sessions.create) in the Spanner permissions list either. I've been playing around with setting every permission I could find related to Spanner on the account I used to log in with gcloud.
My GOOGLE_APPLICATION_CREDENTIALS variable is set, and I've also tried with gcloud beta auth.
The Cloud Spanner IAM roles, including the permission spanner.sessions.create, are listed and described here: https://cloud.google.com/spanner/docs/iam#roles
Note how some of the roles are specific to a person while others are specific to a machine (or a service account).
You need to determine where you are connecting from or executing the code (a Cloud Shell instance, a VM running on GCE, an on-prem machine, or a laptop) and ensure that the correct roles are assigned to the person or service account that is attempting to execute the code and access the Cloud Spanner instance.
Consider this scenario:
Your gcloud SDK may be credentialed with the person@domain.com account, which has been granted the roles/spanner.admin role, so everything works fine for gcloud.
The VM hosting your code and SDK is running as the 12345678901-compute@developer.gserviceaccount.com service account, which has no access to Cloud Spanner whatsoever, causing trouble.
More information on Service Accounts here:
https://cloud.google.com/compute/docs/access/service-accounts
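For example, here is a rough sketch (the instance name, zone, and service account email below are placeholders) of how you could check which service account a GCE VM runs as and grant it a Cloud Spanner role with gcloud:
# See which service account the VM is running as
gcloud compute instances describe my-vm --zone us-central1-a --format='value(serviceAccounts[].email)'
# Grant that account a Spanner role on the project (project ID "todo" taken from the error message)
gcloud projects add-iam-policy-binding todo \
    --member serviceAccount:12345678901-compute@developer.gserviceaccount.com \
    --role roles/spanner.databaseUser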
You probably didn't grant access to your database tododb for the account in the file pointed to by GOOGLE_APPLICATION_CREDENTIALS. Grant, for example, the Cloud Spanner Database User role to this account in the Google Cloud Console.
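As a hedged sketch (the service account email is a placeholder), granting that role at the database level from the command line could look like this:
gcloud spanner databases add-iam-policy-binding tododb \
    --instance todospanner \
    --member serviceAccount:my-app@todo.iam.gserviceaccount.com \
    --role roles/spanner.databaseUser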
I set up a storage container (blob) and a role assignment (Storage Account Contributor) for an app registration with a client secret, so I can query the blob files in a runbook as a service principal. So far so good. The app registration has the API permission to Azure Storage, and it ran fine.
I then wanted to check my error handling and the output of the runbook when permissions are missing, so I removed the API permission to Azure Storage on the app registration. And nothing changed at all... The runbook successfully created the storage context and downloaded/uploaded the file without a problem.
After some digging, I noticed that the object ID of the app registration is different when I look at it in Access Control (IAM) of the storage container than when I load the object in Azure Active Directory (see pic below). So I thought there must be some "noise", and removed and re-added the role assignment to the container. I then ran into the error as expected.
After successfully working on my error handling, I re-applied the permissions and... the error won't disappear. So I again looked at the objects, and again the object IDs were different. I had to remove the RBAC assignment and re-add it to reflect the permission change. After re-adding, still the same issue: I have different IDs.
Does anyone know why that's different? And why won't it reflect the permission change without the remove-and-re-add?
Thank you!
Storage Container vs AAD:
An Azure AD app registration is backed by two directory objects: an application and a service principal. As its name implies, the latter is the principal for authentication/authorization. That is why you see two object IDs.
Regarding the access issue, all the principal (user or service) needs is an RBAC role assignment, so adding or removing application permissions won't make a difference.
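If you want to see the two objects from the CLI, a quick sketch (the client ID is a placeholder, and the returned property is id or objectId depending on your Azure CLI version) would be:
# The Application object shown under App Registrations
az ad app show --id <app-client-id> --query id -o tsv
# The ServicePrincipal object that RBAC role assignments on the container actually reference
az ad sp show --id <app-client-id> --query id -o tsv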
I'm trying to apply the 'Application Administrator' role to a service principal to allow it to create other service principals in AD. I would have assumed that having the ability to manage all aspects of app registrations etc., as explained in the docs here: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/users-groups-roles/directory-assign-admin-roles.md, would have allowed me to do this, but I still cannot create new service principals this way.
It looks as if the application has been created when looking in AD App Registrations, but the command errors out with insufficient privileges.
I have tried several approaches through bash and PowerShell, trying to create the AD application first and then creating a service principal from that application ID. I also tried with the 'Global Admin' role, and that works as expected; however, we're trying to limit privileges as much as possible.
The command I'm trying to run in bash is:
az ad sp create-for-rbac -n $spn_name --skip-assignment
And the equivalent in PowerShell:
New-AzAdServicePrincipal -ApplicationId $appid
Both are run from an SPN with only the 'Application Administrator' role assigned.
Creating service principal failed for appid 'http://test-spn1'. Trace followed:
{Trace JSON}
Insufficient privileges to complete the operation.
To grant an application the ability to create, edit and delete all aspects of apps (both Application objects and ServicePrincipal objects, represented in the portal under App Registrations and Enterprise Apps, respectively), you should consider the following two app-only permissions (instead of the directory role):
Application.ReadWrite.All - Create Application and ServicePrincipal objects and manage any Application and ServicePrincipal objects.
Application.ReadWrite.OwnedBy - Create Application and ServicePrincipal objects (and automatically get set as owner), and manage Application and ServicePrincipal objects it is owner of (either because it created them, or because it was assigned as an owner).
These permissions are pretty close to what the Application Administrator directory role allows for users. They're available for both the Azure AD Graph API (which is the API used by the Azure CLI, the Azure AD PowerShell module (AzureAD), and the Azure PowerShell module (Az)) and the Microsoft Graph API (which you should not use for production scenarios, as the application and servicePrincipal entities are still in beta). The permissions are documented here:
* https://learn.microsoft.com/graph/permissions-reference#application-resource-permissions
Warning: Both of these permissions are very high privilege. By being able to manage Application and ServicePrincipal objects, they can add credentials for those objects (keyCredentials and passwordCredentials) and in doing so, exercise any access which has been granted to those other apps. If an app granted Application.ReadWrite.All is compromised, pretty much all apps are compromised.
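If you decide to go this route, a rough sketch of granting one of these app-only permissions with the Azure CLI might look like the following. The app ID and GUIDs are placeholders you would need to look up yourself (the API GUID is the app ID of the Graph API you are targeting, and the permission ID comes from the permissions reference above), and an admin must still consent:
# Add the Application.ReadWrite.All application permission to your app registration
az ad app permission add --id <your-app-client-id> \
    --api <graph-api-app-id> \
    --api-permissions <application-readwrite-all-role-id>=Role
# Grant tenant-wide admin consent for the new permission
az ad app permission admin-consent --id <your-app-client-id>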
I have Google Cloud Build set up, and I'd like a way to make the builds publicly visible, to use in an open source project, a bit like what TravisCI and CircleCI offer; see an example below:
https://travis-ci.org/wagtail/Willow/pull_requests
Is this possible?
Can you make it possible for a non-signed-in user to inspect a build?
A solution could be to use Google Identity and Access Management to grant the Cloud Build Viewer role to allUsers. However, this cannot be done at the moment.
The idea is to give the cloudbuild.builds.get and cloudbuild.builds.list permissions to everyone on the internet, which would allow them to call those Cloud Build API methods that require these permissions. You can grant roles to Google Accounts or Groups, Service accounts or G Suite domains, but not to everyone.
You can find detailed instructions to grant roles through the GCP console in the Cloud Build documentation.
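For reference, granting the role to a specific account (rather than allUsers, which is rejected) would be a one-liner; the project ID and user email here are placeholders:
gcloud projects add-iam-policy-binding my-project \
    --member user:someone@example.com \
    --role roles/cloudbuild.builds.viewer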
One of my clients wants to understand the IAM feature before migrating their business application to the Amazon cloud.
I have figured out two use cases which we can recommend to our client; these are:
Resource-Level Permissions for EC2
• Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
• Control which users can terminate which instances.
• Restrict a user's access to a single EC2 instance (currently not supported by Amazon's APIs); a policy sketch follows below.
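A sketch of such a resource-level policy, attached inline to a user via the AWS CLI (the user name, region, account ID, and instance ID are placeholders):
aws iam put-user-policy --user-name alice --policy-name TerminateOneInstance \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }]
  }'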
IAM Roles for Amazon EC2 resources
Command Line Usage
• Unix/Linux/Windows - Use the AWS Command Line Interface, a unified tool to manage AWS services. We can use the CLI from an EC2 instance launched with IAM role support without specifying the credentials explicitly (see the sketch after this list).
Programmatic Usage
• Use the appropriate AWS SDK for your language of choice. Configure it without specifying the credentials.
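A minimal sketch of what that looks like on an instance launched with an IAM role (the bucket name is a placeholder; no credentials are configured anywhere):
# The CLI picks up temporary role credentials from the instance metadata automatically
aws sts get-caller-identity
aws s3 ls s3://my-app-bucket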
I would like to know other capabilities of IAM which we can recommend to our client and other use cases which you can recommend to us. Please let us know if any further explanation is required.
Any prompt response will be highly appreciated.
Thanks in advance
This is a very useful feature of AWS!
User Management - If you are a large team, you will have to give different users (developers, testers, deployers) different types of permissions and access levels (say, S3 read-only, DynamoDB full access, etc.).
Manage Users : http://aws.amazon.com/iam/details/manage-users/
Not keeping credentials in code. If you use IAM roles, you can specify that an EC2 instance should run under a given role. This helps you achieve things like "a cluster with access only to S3, not the DB".
IAM Roles for Amazon EC2 - Amazon Elastic Compute Cloud : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
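For example, a hedged sketch of launching an instance under a role so that code on it only gets S3 access (the AMI ID and instance profile name are placeholders for ones you have created):
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --iam-instance-profile Name=s3-read-only-profile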
Handling release staging. This is a benefit of roles. You move apps from dev, QA, and staging to prod. I usually keep different accounts for this. In this case, if you configure the EC2 instances to run on roles, then the stage difference can be handled without code changes. Just move the build from one account to another, and it works with no risk!
Lots of other benefits:
Product Details : http://aws.amazon.com/iam/details/
How do I upload data to Google BigQuery with gsutil, by using a Service Account I created in the Google APIs Console?
First I'm trying to upload data to Cloud Storage using gsutil, as that seems to be the recommended model. Everything works fine with gmail user approval, but it does not allow me to use a service account.
It seems I can use the Python API to get an access token using signed JWT credentials, but I would prefer using a command-line tool like gsutil with support for resumable uploads etc.
EDIT: I would like to use gsutil in a cron job to upload files to Cloud Storage every night and then import them to BigQuery.
Any help or directions to go would be appreciated.
To extend Mike's answer, you'll need to:
Download the service account key file and put it in, e.g., /etc/backup-account.json
gcloud auth activate-service-account --key-file /etc/backup-account.json
And now all calls use said service account.
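Putting it together for the nightly cron use case from the question, a sketch (the bucket, dataset, table, and file paths are placeholders, and the table is assumed to already exist) could look like:
#!/bin/bash
# Authenticate once as the service account, then copy to GCS and load into BigQuery
gcloud auth activate-service-account --key-file /etc/backup-account.json
gsutil cp /var/exports/data-$(date +%F).csv gs://my-backup-bucket/
bq load --source_format=CSV mydataset.mytable gs://my-backup-bucket/data-$(date +%F).csv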
Google Cloud Storage just released a new version (3.26) of gsutil that supports service accounts (as well as a number of other features and bug fixes). If you already have gsutil installed you can get this version by running:
gsutil update
In brief, you can configure a service account by running:
gsutil config -e
See gsutil help config for more details about using the config command.
See gsutil help creds for information about the different flavors of credentials (and different use cases) that gsutil supports.
Mike Schwartz, Google Cloud Storage Team
Service accounts are generally used to identify applications, but when using gsutil you're an interactive user, and it's more natural to use your personal account. You can always associate your Google Cloud Storage resources with both your personal account and/or a service account (via access control lists or the developer console Team tab), so my advice would be to use your personal account with gsutil and then use a service account for your application.
First of all, you should be using the bq command line tool to interact with BigQuery from the command line. (Read about it here and download it here).
I agree with Marc that it's a good idea to use your personal credentials with both gsutil and bq; that said, the bq command-line tool does support the use of service accounts. The command to use service account auth might look something like this:
bq --service_account 1234567890@developer.gserviceaccount.com --service_account_credential_store keep_me_safe --service_account_private_key_file myfile.key query 'select count(*) from publicdata:samples.shakespeare'
Type bq --help for more info.
It's also pretty easy to use service accounts in your code via Python or Java. Here's a quick example using some code from the BigQuery Authorization guide.
import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

# REPLACE WITH YOUR Project ID
PROJECT_NUMBER = 'XXXXXXXXXXX'
# REPLACE WITH THE SERVICE ACCOUNT EMAIL FROM GOOGLE DEV CONSOLE
SERVICE_ACCOUNT_EMAIL = 'XXXXX@developer.gserviceaccount.com'

# Read the service account's PKCS12 private key file
f = file('key.p12', 'rb')
key = f.read()
f.close()

# Build signed-JWT credentials for the BigQuery scope and authorize an HTTP client
credentials = SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL,
    key,
    scope='https://www.googleapis.com/auth/bigquery')
http = httplib2.Http()
http = credentials.authorize(http)

# Build the BigQuery service and list the project's datasets
service = build('bigquery', 'v2')
datasets = service.datasets()
response = datasets.list(projectId=PROJECT_NUMBER).execute(http)

print('Dataset list:\n')
for dataset in response['datasets']:
    print("%s\n" % dataset['id'])
Posting as an answer, instead of a comment, based on Jonathan's request
Yes, an OAuth grant made by an individual user will no longer be valid if the user no longer exists. So, if you use the user-based flow with your personal account, your automated processes will fail if you leave the company.
We should support service accounts with gsutil, but don't yet.
You could do one of:
1. Add the feature yourself to gsutil/oauth2_plugin/oauth2_helper.py, using the existing Python OAuth client implementation of service accounts.
2. Retrieve the access token externally via the service account flow and store it in the cache location specified in ~/.boto (slightly hacky).
3. Create a role account yourself (via gmail.com or Google Apps), grant permission to that account, and use it for the OAuth flow.
We've filed a feature request to support service accounts in gsutil and have some initial positive feedback from the team (though I can't give an ETA).
As of today you don't need to run any command to set up a service account to be used with gsutil. All you have to do is create ~/.boto with the following content:
[Credentials]
gs_service_key_file=/path/to/your/service-account.json
Edit: you can also tell gsutil where it should look for the .boto file by setting BOTO_CONFIG (docs).
For example, I use one service account per project with the following config, where /app is the path to my app directory:
.env:
BOTO_CONFIG=/app/.boto
.boto:
[Credentials]
gs_service_key_file=/app/service-account.json
script.sh:
export $(xargs < .env)
gsutil ...
In the script above, export $(xargs < .env) serves to load the .env file (source). It tells gsutil the location of the .boto file, which in turn tells it the location of the service account. When using the Google Cloud Python library you can do all of this with GOOGLE_APPLICATION_CREDENTIALS, but that’s not supported by gsutil.