Google Cloud Storage - Handle rotating keys from outside the environment - rotation

Need help with how to handle rotating keys for Google Cloud Storage that are managed by one Google Cloud account but accessed by an app running in another Google Cloud account. I tried searching for solutions but couldn't find an answer.

With IAM you can grant permissions at the project level and, for some resources, at the resource level.
That's the case for your KMS keys, where you can grant permissions on the key ring
or directly at the key level.
Choose whichever scope works best for your use case, and grant the external project's service account the appropriate role (Decrypter to read the files in GCS, Encrypter to write files).
Note: a key rotation just creates a new version of the key, so roles granted on the key or key ring continue to apply after each rotation.
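For example, a rough sketch with gcloud (the key ring, key, location and service account names are placeholders, not values from your setup):
# Grant decrypt access on the whole key ring so the external app can read CMEK-encrypted objects
gcloud kms keyrings add-iam-policy-binding my-key-ring \
    --location=us-central1 \
    --member="serviceAccount:app-sa@other-project.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyDecrypter"
# Or scope the grant to a single key, and include encrypt access if the app also writes files
gcloud kms keys add-iam-policy-binding my-key \
    --keyring=my-key-ring \
    --location=us-central1 \
    --member="serviceAccount:app-sa@other-project.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"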

Related

Azure Storage Container: Access Control (IAM) for an App Registration shows a different Object ID than the App Registration in AAD

I set up a Storage Container (Blob) and a Role Assignment (Storage Account Contributor) for an App Registration with a client secret, so I can query the blob files in a runbook as a service principal. So far so fine. The App Registration has API permission to Azure Storage and it ran fine.
I then wanted to check my error handling and the output of the runbook when permissions are missing, so I removed the API permission to Azure Storage on the App Registration. And nothing changed at all... The runbook successfully created the storage context and down-/uploaded the file without a problem.
After some digging, I noticed that the object ID of the App Registration is different when I look at it in Access Control (IAM) of the storage container than when I load the object in Azure Active Directory (see pic below). So I thought there must be some "noise" and removed and re-added the Role Assignment to the container. I then ran into the error as expected.
After successfully working on my error handling, I re-applied the permissions and... the error won't disappear. So I looked at the objects again and, again, the object IDs were different. I had to remove the RBAC assignment and re-add it to reflect the permission change. After re-adding, still the same issue: I have different IDs.
Does anyone know why that's different? And why won't it reflect the permission change without a remove/re-add?
Thank you!
Storage Container vs AAD:
An Azure AD app registration is backed by two directory objects: an application and a service principal. As its name implies, the latter is the principal for authentication/authorization. Thus, you will see two object IDs.
Regarding the access issue: all a principal (user or service) needs is an RBAC role assignment, so adding or removing API permissions won't make a difference.
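You can see both objects from the Azure CLI; a quick sketch (the client ID is a placeholder, and newer Graph-based CLI versions return the field as id, while older ones call it objectId):
# Application object behind the app registration
az ad app show --id <application-client-id> --query id -o tsv
# Service principal object - this is the ID shown in the storage account's Access Control (IAM) blade
az ad sp show --id <application-client-id> --query id -o tsv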

How do I check that a Google Cloud service account has a particular permission programmatically?

I'm making an integration with a user-supplied GCS bucket. The user will give me a service account, and I want to verify that service account has object write permissions enabled to the bucket. I am failing to find documentation on a good way to do this. I expected there to be an easy way to check this in the GCS client library, but it doesn't seem as simple as myBucket.CanWrite(). What's the right way to do this? Do I need to have the bucket involved, or is there a way, given a service account json file, to just check that storage.objects.create exists on it?
IAM permissions can be granted at org, folder, project and resource (e.g. GCS Bucket) level. You will need to be careful that you check correctly.
For permissions granted explicitly to the bucket:
Use APIs Explorer to find Cloud Storage service
Use Cloud Storage API reference to find the method
Use BucketAccessControls:get to retrieve a member's (e.g. a Service Account's) permission (if any).
APIs Explorer sometimes has code examples but, knowing the method, you can find the equivalent in the Go SDK.
The documentation includes a summary for ACLs using the List method, but I think you'll want to use Get (or equivalent).
NOTE I've not done this.
There doesn't appear to be a specific match to the underlying API's Get in the Go library.
From a Client, you can use Bucket method with a Bucket name to get a BucketHandle and then use the ACL method to retrieve the bucket's ACL (which should include the Service Account's email address and role, if any).
Or you can use the IAM method to get the bucket's Handle from the IAM library (!) and then use the Policy method to get the resource's IAM Policy, which will include the Service Account's email address and IAM role (if any).
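If all you ultimately need is a yes/no answer for a specific permission, the bucket's testIamPermissions method in the Cloud Storage JSON API (the Go IAM Handle exposes the same thing as TestPermissions) is worth a look: called as the user-supplied service account, it returns whichever of the requested permissions the caller actually holds on the bucket, regardless of the level at which they were granted. A rough sketch, assuming you have the service account's JSON key and substituting your own bucket name:
# Authenticate as the user-supplied service account
gcloud auth activate-service-account --key-file=sa-key.json
TOKEN=$(gcloud auth print-access-token)
# The response lists storage.objects.create only if the service account actually holds it
curl -s -H "Authorization: Bearer ${TOKEN}" \
    "https://storage.googleapis.com/storage/v1/b/my-bucket/iam/testPermissions?permissions=storage.objects.create"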
As DazWilkin's answer shows, the permission can be granted at different levels, and it can be difficult to know clearly whether an account has a permission.
For that, Google Cloud released a service: the IAM Troubleshooter. It's part of the Policy Intelligence suite, which helps you understand, analyse and troubleshoot IAM permissions.
The API to call is described in the documentation.
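A hedged sketch of that API call with curl (principal, bucket and permission are placeholders; the caller needs the Policy Troubleshooter API enabled and the right to read the relevant IAM policies):
curl -s -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://policytroubleshooter.googleapis.com/v1/iam:troubleshoot" \
    -d '{
      "accessTuple": {
        "principal": "app-sa@other-project.iam.gserviceaccount.com",
        "fullResourceName": "//storage.googleapis.com/projects/_/buckets/my-bucket",
        "permission": "storage.objects.create"
      }
    }'
The response explains whether access is granted and which policies, at which level, led to that result.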

Is there a method preventing dynamic ec2 key pairs from being written to tfstate file?

We are starting to use Terraform to build our AWS EC2 infrastructure but would like to do this as securely as possible.
Ideally, what we would like to do is create a key pair for each Windows EC2 instance dynamically and store the private key in Vault. This is possible, but I cannot think of a way of implementing it without having the private key written to the tfstate file. Yes, I know I can store the tfstate file in an encrypted S3 bucket, but this does not seem like an optimal, secure solution.
I am happy to write custom code if need be to have the key pair generated via another mechanism and the name passed as a variable to Terraform, but I don't want to if there are more robust and tested methods out there. I was hoping we could use Vault to do this, but from my research it does not look possible.
Has anyone got any experience of doing something similar? Failing that, any suggestions?
The most secure option is to have an arbitrary keypair whose private key you destroy, plus user_data that joins the instances to an AWS Managed Microsoft AD domain. After that you can use conventional AD users and groups to control access to the instances (but not group policy in any depth, regrettably). You'll need a domain member server to administer AD at that level of detail.
If you really need to be able to use local admin on these Windows EC2 instances, then you'll need to create the keypair for decrypting the password once, manually, and then share it securely with other admins through a secret or password manager such as Vault or 1Password.
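For reference, retrieving and decrypting a Windows instance's administrator password with that shared private key can be done with the AWS CLI; a sketch with placeholder values:
# Decrypts the auto-generated administrator password locally using the keypair's private key
aws ec2 get-password-data \
    --instance-id i-0123456789abcdef0 \
    --priv-launch-key /path/to/shared-keypair.pem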
I don't see any security advantage to creating a keypair per instance, just considerable added complexity. If you're concerned about exposure, change the administrator passwords after obtaining them and store those in your secret or password manager.
I still advise going with AD if you are using Windows. Windows with AD enables world-leading unified endpoint management and Microsoft has held the lead on that for a very long time.

Application Administrator AD role not providing correct permissions

I'm trying to apply the 'Application Administrator' role to a service principal to allow it to create other service principals in AD. I would have assumed that having the ability to manage all aspects of app registrations etc., as explained in the docs here: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/users-groups-roles/directory-assign-admin-roles.md, would have allowed me to do this, but I still cannot create new service principals this way.
It looks as if it has been created when looking in AD App Registrations, but it errors out with insufficient privileges.
I have tried several approaches through bash and PowerShell, trying to create the AD application first and then creating a service principal from that application ID. I also tried with the 'Global Admin' role and that works as expected; however, we're trying to limit privileges as much as possible.
The command I'm trying to run in bash is
az ad sp create-for-rbac -n $spn_name --skip-assignment
And the equivalent in PowerShell
New-AzAdServicePrincipal -ApplicationId $appid
From an SPN with only the 'Application Administrator' role assigned.
Creating service principal failed for appid 'http://test-spn1'. Trace followed:
{Trace JSON}
Insufficient privileges to complete the operation.
To grant an application the ability to create, edit and delete all aspects of apps (both Application objects and ServicePrincipal objects, represented in the portal under App Registrations and Enterprise Apps, respectively), you should consider the following two app-only permissions (instead of the directory role):
Application.ReadWrite.All - Create Application and ServicePrincipal objects and manage any Application and ServicePrincipal objects.
Application.ReadWrite.OwnedBy - Create Application and ServicePrincipal objects (and automatically get set as owner), and manage Application and ServicePrincipal objects it is owner of (either because it created them, or because it was assigned as an owner).
These permissions are pretty close to what the Application Administrator directory role allows for users. They're available for both Azure AD Graph API (which is the API used by the Azure CLI, the Azure AD PowerShell module (AzureAD), and the Azure PowerShell module (Az)) and Microsoft Graph API (which you should not use for production scenarios, as the application and servicePrincipal entities are still in beta). The permissions are documented here:
* https://learn.microsoft.com/graph/permissions-reference#application-resource-permissions
Warning: Both of these permissions are very high privilege. By being able to manage Application and ServicePrincipal objects, they can add credentials for those objects (keyCredentials and passwordCredentials) and in doing so, exercise any access which has been granted to those other apps. If an app granted Application.ReadWrite.All is compromised, pretty much all apps are compromised.
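Once one of those application permissions has been granted and admin consent applied, the failing command from the question should work when run in the service principal's own context; a sketch with placeholder values:
# Sign in as the service principal that now holds Application.ReadWrite.OwnedBy (or .All)
az login --service-principal -u <app-id> -p <client-secret> --tenant <tenant-id>
# This should no longer fail with "Insufficient privileges to complete the operation"
az ad sp create-for-rbac -n test-spn1 --skip-assignment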

Using Amazon S3 in place of an SFTP Server

I need to set up a repository where multiple people can go to drop off Excel and CSV files. I need a secure environment that has access control, so customers logging on to drop off their own data can't see another customer's data. So if person A logs on to drop off a Word document, they can't see person B's Excel sheet. I have an AWS account and would prefer to use S3 for this. I originally planned to set up an SFTP server on an EC2 instance; however, after doing some research, I feel that using S3 would be more scalable and safer. However, I've never used S3 before, nor have I seen it in a production environment. So my question really comes down to this: does S3 provide a user interface that allows multiple people to drop files off, similar to that of an FTP server? And can I create access control so people can't see other people's data?
Here are the developer resources for S3
https://aws.amazon.com/developertools/Amazon-S3
Here are some pre-built widgets
http://codecanyon.net/search?utf8=%E2%9C%93&term=s3+bucket
Let us know your angle, as we can provide other ideas if we know more about your requirements.
Yes, it does. You can control access to your resources using IAM users and roles.
http://aws.amazon.com/iam/
You can grant privileges to parts of an S3 bucket depending on the user or role. For example:
mybucket/user1
mybucket/user2
mybucket/development
could all have different permissions.
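A hedged sketch of what such a per-user policy could look like, using the ${aws:username} policy variable so each IAM user can only list and touch their own prefix (bucket and user names are placeholders):
aws iam put-user-policy --user-name user1 --policy-name s3-own-prefix --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::mybucket/${aws:username}/*"
    }
  ]
}'
Because the policy document is single-quoted, the shell leaves ${aws:username} alone and AWS resolves it per user at request time, so the same policy text can be attached to each customer's IAM user.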
Hope this helps.
