I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do exactly what Custom Resources do in AWS CloudFormation.
https://github.com/mobfox/terraform-provider-multiverse
You can even use an AWS Lambda function, written in any language you like, to manage your resources. It also keeps the state of your resources, so you can read, update, and delete them too. It creates a real resource, so it is not like the External Data source.
Yes, the process is outlined here:
https://github.com/hashicorp/terraform#developing-terraform
Your customised resources can live in your own build of the AWS provider plugin.
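For illustration, a minimal sketch of what the forked provider's new resource might look like; resourceAwsFoo, its schema, and the SDK import path are assumptions that depend on the provider version you fork:

package aws

import (
	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

// resourceAwsFoo is a hypothetical custom resource added to a fork of
// terraform-provider-aws.
func resourceAwsFoo() *schema.Resource {
	return &schema.Resource{
		Create: resourceAwsFooCreate,
		Read:   resourceAwsFooRead,
		Delete: resourceAwsFooDelete,
		Schema: map[string]*schema.Schema{
			"name": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}

func resourceAwsFooCreate(d *schema.ResourceData, meta interface{}) error {
	// meta is the provider's configured client, so the resource reuses the
	// credentials and region from the existing aws provider block.
	d.SetId(d.Get("name").(string))
	return resourceAwsFooRead(d, meta)
}

func resourceAwsFooRead(d *schema.ResourceData, meta interface{}) error {
	return nil
}

func resourceAwsFooDelete(d *schema.ResourceData, meta interface{}) error {
	d.SetId("")
	return nil
}

Registering it in provider.go's ResourcesMap under "aws_foo" is what gives it the aws_ prefix and access to the provider's configured credentials.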
I'm making an integration with a user-supplied GCS bucket. The user will give me a service account, and I want to verify that the service account has object write permission on the bucket. I'm failing to find documentation on a good way to do this. I expected there to be an easy way to check this in the GCS client library, but it doesn't seem as simple as myBucket.CanWrite(). What's the right way to do this? Do I need to have the bucket involved, or is there a way, given a service account JSON file, to just check that storage.objects.create exists on it?
IAM permissions can be granted at org, folder, project and resource (e.g. GCS Bucket) level. You will need to be careful that you check correctly.
For permissions granted explicitly to the bucket:
Use APIs Explorer to find Cloud Storage service
Use Cloud Storage API reference to find the method
Use BucketAccessControls:get to retrieve a member's (e.g. a Service Account's) permission (if any).
APIs Explorer (sometimes) has code examples but, knowing the method, you can find the equivalent in the Go SDK.
The documentation includes a summary for ACLs using the List method, but I think you'll want to use Get (or equivalent).
NOTE I've not done this.
There doesn't appear to be a specific match to the underlying API's Get in the Go library.
From a Client, you can use Bucket method with a Bucket name to get a BucketHandle and then use the ACL method to retrieve the bucket's ACL (which should include the Service Account's email address and role, if any).
Or you can use the IAM method to get the bucket's IAM library's (!) Handle and then use the Policy method to get the resource's IAM Policy which will include the Service Account's email address and IAM role (if any).
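If the only goal is to confirm that the service account can write objects, there is a more direct route than inspecting ACLs or policies: authenticate as that service account and ask the service which permissions the caller holds, via testIamPermissions (exposed in Go as TestPermissions on the bucket's IAM handle). A minimal sketch; the credentials file and bucket name are placeholders:

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Authenticate as the user-supplied service account.
	client, err := storage.NewClient(ctx,
		option.WithCredentialsFile("service-account.json"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// TestPermissions returns the subset of the requested permissions that
	// the caller actually holds on the bucket, however they were granted
	// (bucket, project, folder, or org level).
	granted, err := client.Bucket("my-bucket").IAM().
		TestPermissions(ctx, []string{"storage.objects.create"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("granted:", granted)
}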
As explained in DazWilkin's answer, permissions can be granted at different levels, and it can be difficult to know clearly whether an account has a permission.
For that, Google Cloud released a service: the IAM Troubleshooter. It's part of the Policy Intelligence suite, which helps you understand, analyse, and troubleshoot IAM permissions.
The API to call is described in the documentation.
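For reference, a sketch of the call; the principal, bucket name, and permission are placeholders, and the caller needs rights to read the relevant IAM policies:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "accessTuple": {
          "principal": "my-sa@my-project.iam.gserviceaccount.com",
          "fullResourceName": "//storage.googleapis.com/projects/_/buckets/my-bucket",
          "permission": "storage.objects.create"
        }
      }' \
  https://policytroubleshooter.googleapis.com/v1/iam:troubleshoot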
In the AWS Lambda service's console, there is a Configuration tab called Database proxies, shown here:
However, in the Terraform registry's entry for an AWS Lambda Function, there does not seem to be a place to define this relationship for my lambda. It's easy enough to add manually after I deploy the Lambda, but for obvious reasons this isn't optimal. It seems like using a DB proxy is a common enough use case for serverless architectures that there would be a way to do this with the resources I've referenced.
What am I missing?
EDIT: As of 9 months ago, this feature was not included in the AWS provider, but I'm unsure how to search upcoming nightly or dev releases for this feature...
EDIT EDIT (from my comment below): The RDS instance, its proxy, the roles they use, the Lambdas, and the VPC in which they sit all work as expected. If I go to the screenshot above in the Lambdas I am deploying, I can Add database proxy just fine using the proxy I deployed with Terraform. There are no issues with the code, nor any errors. The problem is that having to manually add the database proxy to each Lambda I deploy defeats the purpose of using Terraform.
I am looking for tools/software for dynamically changing Linux conf/YAML files via an API or a tool like Consul.
If you have any experience with Consul, please give feedback about creating templates for conf/YAML files. Also, can it be done via Consul without running a long-lived service?
Consul Template or Gomplate can be used to template configuration files based on changes in a backend data source.
https://learn.hashicorp.com/tutorials/consul/consul-template provides a basic example of a template which regenerates file contents when keys are added to Consul's key-value store.
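For example, a minimal template that reads keys from Consul's KV store (the key paths and file names here are just illustrations):

# config.yml.tpl -- rendered by consul-template from Consul KV
log_level: {{ key "myapp/config/log_level" }}
max_connections: {{ key "myapp/config/max_connections" }}

Rendering it, with an optional command that runs whenever the output changes:

consul-template -template "config.yml.tpl:/etc/myapp/config.yml:systemctl reload myapp"

If you don't want a long-running service, consul-template's -once flag renders the file a single time and exits.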
I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda, and by doing so saves the generated API Gateway URL to its remote state file. Then when I install my Helm chart, I'd like it if Helm were smart enough to automatically pull from the Terraform remote state file to get the URL it needs of the API Gateway endpoint, to use as a variable within my chart.
Currently, I either have to copy and paste or use Bash. I can get away with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any functionality to search in external files or state.
It needs you to tell it exactly what to inject.
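If it helps, the explicit hand-off can at least be collapsed to one step that works from any directory; the -chdir and -raw flags below require Terraform 0.15+, and the paths are illustrative:

# Read the output straight from the (remote) state, then hand it to Helm.
WEBHOOK_URL=$(terraform -chdir=./infra output -raw webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"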
I am writing a serverless application which is connected to DynamoDB.
Currently I am reading the access key ID and secret access key from a JSON file.
I am going to use Jenkins for CI and need a way to secure these keys.
What I am going to do is set the keys as environment variables and read them in the application. But the problem is that I don't know how to set the environment variables every time a Lambda function is started.
I have read there's a way to configure this in the serverless.yml file, but I don't know how.
How can I achieve this?
Don't use environment variables. Use the IAM role that is attached to your lambda function. AWS Lambda assumes the role on your behalf and sets the credentials as environment variables when your function runs. You don't even need to read these variables yourself. All of the AWS SDKs will read these environment variables automatically.
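If you're deploying with the Serverless Framework, the permissions for that role can be declared in serverless.yml; a minimal sketch, in which the runtime, table name, actions, and region are placeholders:

# serverless.yml -- grant the function's role DynamoDB access instead of
# shipping access keys with the application.
provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:Query
      Resource: arn:aws:dynamodb:us-east-1:*:table/MyTable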
There's a good guide on serverless security which, among other topics, covers this one as well. It's similar to the OWASP Top 10.
In general, the best practice would be to use AWS Secrets Manager together with the SSM Parameter Store.
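For non-AWS credentials (database passwords, API keys, and so on), the Serverless Framework can resolve SSM parameters at deploy time and expose them to the function as environment variables; a sketch with a hypothetical parameter name (older framework versions need ~true to decrypt a SecureString; newer ones decrypt automatically):

provider:
  environment:
    DB_PASSWORD: ${ssm:/myapp/db_password~true}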