Golang Kubernetes API service account with rest.InClusterConfig

Is it possible to specify or change the service account used when accessing the Kubernetes API from within the cluster via rest.InClusterConfig in Go?
It seems to use the default service account (or the service account the running Pod is under), but I want to use another service account.
I am aware that I can use BuildConfigFromFlags with a kubeconfig file that may be tied to a service account, but I wanted to see if it is possible to override the service account with rest.InClusterConfig.

In Kubernetes, a Pod (or several Pods backing the same service) has a ServiceAccount. That is the way it is designed.
This ServiceAccount can be a specific one that you create; you don't have to use the default ServiceAccount in a Namespace.
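rest.InClusterConfig always loads the Pod's own ServiceAccount token, but nothing stops you from overriding the token on the returned rest.Config before building a clientset. A minimal sketch, assuming you have mounted or projected another ServiceAccount's token into the Pod yourself (the token path below is hypothetical):

package main

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // Start from the in-cluster defaults: API server address, CA bundle,
    // and the token of the Pod's own ServiceAccount.
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }

    // Override the credentials with a different ServiceAccount's token.
    // The path is an assumption: you must mount or project that token
    // into the Pod yourself (e.g. via a projected volume).
    config.BearerToken = ""
    config.BearerTokenFile = "/var/run/secrets/other-sa/token"

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    _ = clientset // requests now authenticate as the other ServiceAccount
}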

Related

How do I check that a Google Cloud service account has a particular permission programmatically?

I'm making an integration with a user-supplied GCS bucket. The user will give me a service account, and I want to verify that the service account has object write permission on the bucket. I am failing to find documentation on a good way to do this. I expected there to be an easy way to check this in the GCS client library, but it doesn't seem as simple as myBucket.CanWrite(). What's the right way to do this? Do I need to have the bucket involved, or is there a way, given a service account JSON file, to just check that storage.objects.create exists on it?
IAM permissions can be granted at the organization, folder, project, and resource (e.g. GCS bucket) level. You will need to be careful to check correctly.
For permissions granted explicitly to the bucket:
Use APIs Explorer to find Cloud Storage service
Use Cloud Storage API reference to find the method
Use BucketAccessControls:get to retrieve a member's (e.g. a Service Account's) permission (if any).
APIs Explorer (sometimes) has code examples but, knowing the method, you can find it in the Go SDK.
The documentation includes a summary for ACLs using the List method, but I think you'll want to use Get (or equivalent).
NOTE I've not done this.
There doesn't appear to be a specific match for the underlying API's Get in the Go library.
From a Client, you can use the Bucket method with a bucket name to get a BucketHandle, and then use the ACL method to retrieve the bucket's ACL (which should include the Service Account's email address and role, if any).
Or you can use the IAM method to get the bucket's Handle from the iam library (!) and then use the Policy method to get the resource's IAM Policy, which will include the Service Account's email address and IAM role (if any).
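For the narrower question ("does this service account hold storage.objects.create on the bucket?"), the iam.Handle returned by the IAM method also has a TestPermissions method, which returns the subset of the requested permissions the caller actually holds. A sketch, assuming the client authenticates as the user-supplied service account (e.g. via GOOGLE_APPLICATION_CREDENTIALS) and a hypothetical bucket name:

package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/storage"
)

func main() {
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    bucket := client.Bucket("my-bucket") // hypothetical bucket name

    // TestPermissions returns the subset of the requested permissions
    // that the current caller holds on the bucket.
    granted, err := bucket.IAM().TestPermissions(ctx, []string{
        "storage.objects.create",
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("granted:", granted) // includes "storage.objects.create" if writable
}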
As DazWilkin's answer shows, permissions can be granted at different levels, and it can be difficult to know clearly whether an account has a permission.
For that, Google Cloud released a service: the IAM Troubleshooter. It's part of the Policy Intelligence suite, which helps you understand, analyse, and troubleshoot IAM permissions.
The API to call is described in the documentation.
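As a sketch of what that call can look like from Go; I haven't run this, and the package and field names (cloud.google.com/go/policytroubleshooter) should be treated as assumptions to verify against the linked documentation:

package main

import (
    "context"
    "fmt"
    "log"

    policytroubleshooter "cloud.google.com/go/policytroubleshooter/apiv1"
    pb "cloud.google.com/go/policytroubleshooter/apiv1/policytroubleshooterpb"
)

func main() {
    ctx := context.Background()
    client, err := policytroubleshooter.NewIamCheckerClient(ctx)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Hypothetical principal, resource, and permission; substitute your own.
    resp, err := client.TroubleshootIamPolicy(ctx, &pb.TroubleshootIamPolicyRequest{
        AccessTuple: &pb.AccessTuple{
            Principal:        "my-sa@my-project.iam.gserviceaccount.com",
            FullResourceName: "//storage.googleapis.com/projects/_/buckets/my-bucket",
            Permission:       "storage.objects.create",
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.GetAccess()) // e.g. GRANTED or NOT_GRANTED
}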

Do I need to ask Google Cloud support to enable the exchange of custom routes in VPC peering?

I have created two Google Cloud projects, one for Cloud SQL and one for a Kubernetes cluster. For accessing the SQL instance in project one I have set import/export of custom routes. Is this enough, or do I need confirmation from Google Cloud? I have read somewhere that after these steps you must ask Google Cloud support to enable the exchange of custom routes for the speckle-umbrella VPC network associated with your instance, which is created automatically when the Cloud SQL instance is created.
As far as I know, this step is not included in the public documentation and is not necessary if you are connecting from within the same project or from any service project (if configured with Shared VPC); in that case you don't need to export those routes. This is generally required if you are connecting from on-premises.
If you are having any issues, please let us know.
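For reference, on a peering you manage yourself, importing/exporting custom routes is just a flag on the peering; something like the following, where the peering and network names are placeholders (the Google-managed speckle-umbrella side is the part you cannot change yourself):

gcloud compute networks peerings update cloudsql-peering \
    --network=my-vpc \
    --import-custom-routes \
    --export-custom-routes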

How can I configure application.properties using AWS CodeDeploy and/or CloudFormation?

I have a Spring Web Service deployed on Elastic Beanstalk. I'm using AWS CloudFormation for the infrastructure and I'm using AWS CodePipeline to deploy the web service automatically from merges to the master branch.
Recently I added DynamoDB integration, and I need to configure a couple of things in my application.properties. I attempted to use environment variables to configure application.properties, but I hit a wall when trying to set the environment variables from CodeDeploy.
This is my application.properties:
amazon.dynamodb.endpoint=${DYNAMODB_ENDPOINT:http://localhost:8000}
amazon.dynamodb.region=${AWS_REGION:default-region}
amazon.dynamodb.accesskey=${DYNAMODB_ACCESS_KEY:TestAccessKey}
amazon.dynamodb.secretkey=${DYNAMODB_SECRET_KEY:TestSecretKey}
spring.data.dynamodb.entity2ddl.auto = create-drop
spring.data.dynamodb.entity2ddl.gsiProjectionType = ALL
spring.data.dynamodb.entity2ddl.readCapacity = 10
spring.data.dynamodb.entity2ddl.writeCapacity = 1
The defaults are for when I'm running a local DynamoDB instance, and they work fine. However, I can't figure out how to get CodeDeploy to set environment variables for me. I also considered getting CloudFormation to set the environment variables, but couldn't find how to do that either. I tried manually setting the environment variables in the EC2 instance, but that didn't work and isn't the solution I'm looking for, as I'm using EB and want this project to use fully automated deployments. Please let me know if this is possible, what the industry standard is for configuring web services, and whether I'm misunderstanding either CodeDeploy or CloudFormation.
In general, it is a bad practice to include access and secret keys in any sort of file or in your deployment automation.
The instance that your application is deployed to should have an instance profile (i.e. an IAM Role) attached to it with the appropriate DynamoDB permissions you need.
If you have that instance profile attached, the SDK should automatically be able to detect the credentials, region, and endpoint it needs to communicate with.
You may need to update the way you are creating your DynamoDB client to just use the defaults.
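The question's service is Spring/Java, but to illustrate what "just use the defaults" means, here is the equivalent with the AWS SDK for Go v2; the default loader walks the same credential chain (environment variables, shared config, then the instance profile) that the Java SDK uses:

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

func main() {
    // LoadDefaultConfig resolves region and credentials from the default
    // chain: env vars, ~/.aws/ shared config, then the EC2 instance role.
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        log.Fatal(err)
    }

    // No explicit endpoint, region, or keys: the instance profile supplies them.
    client := dynamodb.NewFromConfig(cfg)
    _ = client
}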
To set up your development machine with these properties in a way that the AWS SDK can retrieve them without explicitly putting them in properties files, you can run the aws configure command of the AWS CLI, which should set up your ~/.aws/ folder with the region and credentials to use on your dev machine.

Custom resource for existing Terraform provider?

I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do exactly that, in the style of Custom Resources in AWS CloudFormation.
https://github.com/mobfox/terraform-provider-multiverse
You can even use AWS Lambda, with any language you like, to manage your resources. It also keeps state for your resources, so you can read, update, and delete them too. It creates a real resource, so it is not like an External Data source.
Yes, the process is outlined here:
https://github.com/hashicorp/terraform#developing-terraform
Your customised resources can live in your own build of the AWS plugin.
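On the naming point: resource types are namespaced by the provider that registers them, so a separate provider cannot register aws_foo; only a fork of the AWS provider itself can. Here is a minimal sketch of a standalone provider using today's terraform-plugin-sdk (the resource body is hypothetical):

package main

import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
    "github.com/hashicorp/terraform-plugin-sdk/v2/plugin"
)

func main() {
    plugin.Serve(&plugin.ServeOpts{
        ProviderFunc: func() *schema.Provider {
            return &schema.Provider{
                ResourcesMap: map[string]*schema.Resource{
                    // Registered under the provider's own prefix, not aws_.
                    "awscontrib_foo": {
                        Create: func(d *schema.ResourceData, m interface{}) error {
                            d.SetId("foo") // hypothetical: compose AWS resources here
                            return nil
                        },
                        Read:   func(d *schema.ResourceData, m interface{}) error { return nil },
                        Delete: func(d *schema.ResourceData, m interface{}) error { return nil },
                        Schema: map[string]*schema.Schema{},
                    },
                },
            }
        },
    })
}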

What are the possible capabilities of IAM in AWS?

One of my clients wants to understand the IAM feature set before migrating their business application to the Amazon cloud.
I have figured out two use cases that we can recommend to our client:
Resource-Level Permissions for EC2
• Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.
• Control which users can terminate which instances (see the policy sketch after this list).
• Restrict a user's access to a single EC2 instance (currently not supported by Amazon's APIs).
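For example, a policy like the following (a sketch; the account and instance IDs are placeholders) allows terminating only one specific instance:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }
  ]
}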
IAM Roles for Amazon EC2 resources
Command Line Usage
• Unix/Linux/Windows - Use the AWS Command Line Interface, a unified tool to manage AWS services. From an EC2 instance launched with IAM role support, the CLI can be used without specifying credentials explicitly.
Programmatic Usage
• Use the appropriate AWS SDK for your language of choice. Configure it without specifying the credentials.
I would like to know other capabilities of IAM that we can recommend to our client, and other use cases you can recommend to us. Please let us know if any further explanation is required.
Any prompt response will be highly appreciated.
Thanks in advance
This is a very useful feature of AWS!
User Management - If you are a large team, you will have to give different users (developers, testers, deployment) different types of permissions, with access levels like S3 read-only, DynamoDB full-access, etc.
Manage Users: http://aws.amazon.com/iam/details/manage-users/
Not keeping credentials in code. If you use IAM roles, you can specify that an EC2 instance should run under a given role. This helps you achieve things like "a cluster with access only to S3, not the DB".
IAM Roles for Amazon EC2 - Amazon Elastic Compute Cloud: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Handling release staging. This is a benefit of roles. You move apps through dev, QA, staging, and prod; I usually keep different accounts for this. If you configure the EC2 instances to run with roles, the differences between stages can be handled without code changes. Just move the build from one account to another, and it works with no risk!
There are lots of other benefits:
Product Details: http://aws.amazon.com/iam/details/
