Getting Around Terraform's Limitations - ruby

I'm trying to set up Terraform to handle the creation of fine-grained user permissions, and so far have been able to create:
Cognito User Pools, Identity Pools
IAM Roles, Permissions
What I'm struggling with is how to link them together. I have two types of user:
Standard User
Manager
As such, I have found two ways I could use to hook up the correct IAM policy upon login:
Method 1 - Create a custom attribute, and use "Choose Role With Rules" to set a rule that assigns an IAM policy based on that attribute
Method 2 - Create Cognito Groups, and link users and the required IAM policy to each group.
The problem, as far as I can see, is that Terraform doesn't currently support either of those cases, so I need to find a workaround. My question is essentially: how do I get around Terraform's lack of support in some areas?
I've seen some projects that use [Ruby, Go, etc.] to make up for some of these limitations, but I don't quite understand where to start or which option best fits my needs. I haven't been able to find much on Google yet (possibly https://github.com/infrablocks/ruby_terraform). Does anyone have a good guide or resource I could use to get started?

If Terraform does not support something, you can use the local-exec provisioner to execute commands after resource creation. For example, you could use the AWS CLI to add a custom attribute:
resource "aws_cognito_user_pool" "main" {
  # ...

  provisioner "local-exec" {
    command = "aws cognito-idp add-custom-attributes --user-pool-id ${self.id} --custom-attributes <your attributes>"
  }
}
local-exec docs
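If you would rather drive the same CLI calls from a small script instead of a provisioner (which is what the Ruby/Go projects mentioned in the question tend to do), here is a minimal Python sketch of the question's Method 2, linking a Cognito group to an IAM role via the AWS CLI. The pool ID, group name, and role ARN below are placeholders:

```python
# Sketch: the local-exec idea, driven from Python instead of Terraform.
# All identifiers passed in here are placeholders, not real resources.
import shlex

def create_group_command(user_pool_id, group_name, role_arn):
    """Build the AWS CLI invocation that links a Cognito group to an IAM
    role (the question's Method 2), ready to pass to subprocess.run."""
    return [
        "aws", "cognito-idp", "create-group",
        "--user-pool-id", user_pool_id,
        "--group-name", group_name,
        "--role-arn", role_arn,
    ]

cmd = create_group_command(
    "us-east-1_EXAMPLE",                            # placeholder pool id
    "Manager",
    "arn:aws:iam::123456789012:role/manager-role",  # placeholder ARN
)
print(shlex.join(cmd))
# In a real script you would then run it:
# subprocess.run(cmd, check=True)
```

The same wrapper pattern works for add-custom-attributes or any other call Terraform lacks a resource for.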

Related

grafana ec2-instance filter by tags

I am trying to set up a variable template in Grafana that would allow me to show only EC2 instances with specific EC2 tags. I did find
ec2_instance_attribute(us-east-1, InstanceId, {"tag:app": ["$application"]})
on a Grafana community site, and I changed it to
ec2_instance_attribute(us-west-2, InstanceId, {"tag:ENV": ["Prod"]})
The tags are based on my EC2 instance tags, but I keep getting a metric error. I tried removing the brackets and tweaking the expression, and still can't figure it out. Does anyone know how to make this work, using this method or another one I haven't thought of?
Your expression should be working, but I had this error too, due to a missing AWS policy permission, so this might be the case for you as well. AWS has a default policy for reading CloudWatch data, "CloudWatchReadOnlyAccess", but creating my own policy as a copy of "CloudWatchReadOnlyAccess" and adding "ec2:DescribeTags" and "ec2:DescribeInstances" made this work for me.
The above answer may only apply when you control your access via roles with policies.
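As a sketch, the custom policy described above could look roughly like the following. This is an illustration built in Python, not a verbatim copy of CloudWatchReadOnlyAccess; the CloudWatch read actions shown are an assumption about the minimum access needed, with the two EC2 actions appended:

```python
import json

# Illustrative policy document: CloudWatch read access plus the two EC2
# actions Grafana needs for ec2_instance_attribute(). Not a verbatim copy
# of the managed CloudWatchReadOnlyAccess policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "ec2:DescribeTags",
                "ec2:DescribeInstances",
            ],
            "Resource": "*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```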

Message: user order resource type [classic] not exists in [random] when trying to RunInstances on command line

When trying to create an ECS instance via the CLI tools I get the error in the subject line, and I can't find what it means. For example:
$ ./aliyun ecs RunInstances --Amount 1 --ImageId m-0xidtg6bbw1s8voux52d --InstanceType ecs.n1.medium --InstanceName Composer-Test-VM-1 --SecurityGroupId sg-0xi4w9isg0p1ytj1qbhf
ERROR: SDK.ServerError
ErrorCode: InvalidResourceType.NotSupported
Recommend:
RequestId: 1B3E65BD-D181-4552-9A58-599FC51924A7
Message: user order resource type [classic] not exists in [random]
I have credentials configured in ~/.aliyun/config.json.
The default region in config is us-east-1, the ImageId and SecurityGroupId are both in the same region.
I tried a few other instance types, and either I get the same error message or [classic] is replaced by the prefix of the instance type. This leads me to think I can't create virtual machines from some of these instance types in my region, but I have no idea why.
Does anyone know what is causing this specific error or where to find more documentation about it?
I have found the culprit here. Although not stated anywhere (e.g. in --help), the --VSwitchId option is mandatory when specifying a --SecurityGroupId. The VSwitch needs to be in the same availability zone as your security group.
On this link, check out the following documentation under "Description":
For network configuration:
To create an instance in a VPC, you must specify a VPC and a VSwitch. One instance can belong only to one VSwitch.
When you specify VSwitchId, ensure that the security group and VSwitch specified by SecurityGroupId and VSwitchId belong to the same VPC.
If you specify both VSwitchId and PrivateIpAddress, ensure that the private IP address specified by PrivateIpAddress is within the CIDR block of the VSwitch.
PrivateIpAddress is dependent on VSwitchId. You cannot only specify the PrivateIpAddress parameter.
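To make those constraints concrete, here is a small Python sketch (a hypothetical helper, not part of the aliyun CLI) that checks an argument set against the rules above before you invoke RunInstances:

```python
# Sketch encoding the RunInstances rules quoted above, to sanity-check
# arguments before calling the CLI. Purely illustrative; the aliyun CLI
# itself performs the authoritative validation.
def validate_run_instances(security_group_id=None, vswitch_id=None,
                           private_ip=None):
    errors = []
    # A security group lives in a VPC, so a VSwitch must be named too.
    if security_group_id and not vswitch_id:
        errors.append("--VSwitchId is required when --SecurityGroupId is set")
    # PrivateIpAddress depends on VSwitchId and cannot stand alone.
    if private_ip and not vswitch_id:
        errors.append("--PrivateIpAddress depends on --VSwitchId")
    return errors

# The failing command from the question: a security group but no VSwitch.
print(validate_run_instances(security_group_id="sg-0xi4w9isg0p1ytj1qbhf"))
```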
Also note: the Alibaba Cloud product APIs are divided into RPC APIs and RESTful APIs, and most products use the RPC style. When you use the Alibaba Cloud CLI to call an interface, APIs of different styles have different calling methods.
Check out the following link: https://www.alibabacloud.com/help/doc-detail/110344.htm
Hope this helps!

Concurrent az login executions

I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several 'az vm run-command' invocations are executed that all require the proper credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute 'az login' to generate an auth token and 'az account set' to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). This is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the different credentials. That causes the other health check to fail as soon as it reaches its next vm run-command execution, since it is looking for a resource group that exists in a different subscription.
I was thinking of potentially creating a new unique Linux user as part of each stage run and then removing it once it's done, so all pipelines would have separate profile files. This is a bit tricky, though, since this is a Jenkins Docker container using an Alpine image, and I would need to create the users with each pipeline rather than in the Dockerfile, which brings me to a whole other drama: giving the Jenkins user sufficient privileges to create and delete users and so on...
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to alter those default files/location in the Azure docs.
How do you think is the best/easier approach to workaround this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
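The AZURE_CONFIG_DIR isolation can also be scripted. Below is a minimal Python sketch (the helper name and the commented-out az calls are illustrative): each run gets a throwaway config directory, so parallel logins never share state.

```python
# Sketch: give each pipeline run its own Azure config directory so that
# parallel 'az login' calls cannot clobber each other's profile files.
# The 'az' invocations are illustrative; only the env handling matters.
import os
import tempfile

def isolated_az_env(base_env=None):
    """Return (env, config_dir) where env points az at a private dir."""
    env = dict(base_env or os.environ)
    config_dir = tempfile.mkdtemp(prefix="azure-config-")
    env["AZURE_CONFIG_DIR"] = config_dir
    return env, config_dir

env, config_dir = isolated_az_env()
print(config_dir)
# subprocess.run(["az", "login"], env=env)
# subprocess.run(["az", "vm", "run-command", "invoke", ...], env=env)
```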
I would try to keep az login as it is, remove az account set, and use the --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the shared user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.

How to invoke step function from a lambda which is inside a vpc?

I am trying to invoke a step function from a lambda which is inside a VPC.
I get exception that HTTP request timed out.
Is it possible to access step function from a lambda in a vpc?
Thanks,
If your lambda function is running inside a VPC, you need to add a VPC endpoint for step functions.
In the VPC console : Endpoints : Create Endpoint, the service name for step functions is com.amazonaws.us-east-1.states (the region name may vary).
Took me a while to find this in the documentation.
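The service name follows the usual interface-endpoint naming pattern, so a tiny helper (illustrative only) makes the "region name may vary" point concrete:

```python
# The VPC endpoint service name for Step Functions follows the regional
# interface-endpoint pattern: com.amazonaws.<region>.states
def step_functions_endpoint_service(region):
    return f"com.amazonaws.{region}.states"

print(step_functions_endpoint_service("us-east-1"))
# → com.amazonaws.us-east-1.states
print(step_functions_endpoint_service("eu-west-1"))
# → com.amazonaws.eu-west-1.states
```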
It is possible, but it depends on how you are trying to access Step Functions. If you are using the AWS SDK then it should take care of any HTTP security issues; otherwise, if you are executing raw HTTP commands, you will need to mess around with AWS headers.
The other thing you will need to look at is the role that the lambda is executing under. Without seeing how you have things configured, I can only suggest things I encountered: you may need to adjust your policies so the role has the action sts:AssumeRole; another possibility is adding the action iam:PassRole to the same execution role.
The easiest solution is to grant your execution role administrator privileges, test it out, then work backwards to lock down your role access. Remember to treat your lambda function like another API user account and set its privileges appropriately.

How to delete all security groups on Amazon ec2?

I've created new EC2 spot requests over the last weeks. A new security group was created for every request. When the spot requests were deleted the security groups were not deleted. I've hit the 100 groups limit and want to delete them. The EC2 interface apparently allows only one deletion at a time, which means I would have to make 300 clicks to delete these groups. Or is there a better way to delete multiple security groups with few clicks or lines of code?
This would need some basic scripting and an AWS SDK; you can do this with pretty much any of the SDKs provided by AWS.
I would prefer the AWS CLI, as I already have it installed and configured. This is what I would do:
List all the SGs with describe-security-groups
Install jq (the JSON parser for Bash)
Pull out the SG IDs (check this for jq syntax)
Once you have the SG IDs, run delete-security-group in a for loop.
This is a fairly simple and straightforward way of doing what you want to do, and it can be done with any of the AWS SDKs.
These are just a couple of commands which can be combined into a Bash script, provided:
You have the AWS CLI installed and configured
You have jq installed on your system.
If you already have some other AWS SDK installed, then you are better off with that, as Java/Python/Ruby etc. all have their own built-in ways of parsing JSON into native data structures.
Hope this helps.
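If you would rather avoid jq, the same list-then-delete loop can be sketched in Python with only the standard library. The sample JSON below is a made-up stand-in for real describe-security-groups output, and the actual delete call is left commented out:

```python
# Sketch of the list-then-delete loop, parsing the CLI's JSON output with
# the standard library instead of jq. The key names (SecurityGroups,
# GroupId) are those returned by 'aws ec2 describe-security-groups'.
import json

def security_group_ids(describe_output):
    """Pull the SG IDs out of describe-security-groups JSON text."""
    data = json.loads(describe_output)
    return [sg["GroupId"] for sg in data["SecurityGroups"]]

def delete_commands(group_ids):
    """One 'aws ec2 delete-security-group' invocation per group ID."""
    return [["aws", "ec2", "delete-security-group", "--group-id", gid]
            for gid in group_ids]

# Made-up sample standing in for the real CLI output.
sample = '{"SecurityGroups": [{"GroupId": "sg-111"}, {"GroupId": "sg-222"}]}'
for cmd in delete_commands(security_group_ids(sample)):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually delete
```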
I think you can do this by combining a command that lists all security groups with another that deletes them.
If you are using the python boto API (for example) that would be:
import boto
conn = boto.connect_ec2(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
groups = conn.get_all_security_groups()
Which returns (as an example): [SecurityGroup:appserver, SecurityGroup:default, SecurityGroup:vnc, SecurityGroup:webserver]
And then you delete them all:
for group in groups:
    conn.delete_security_group(group.name)
Edit
You should run these commands in a Python shell.
These solutions only work if you don't have rules in other security groups that reference the security groups in question. I have a couple scripts that will delete a single security group, including the ingress rules in other security groups. I also handle the special case of ingress rules referencing the AWS ELB default security group. If you have this more complex situation, the solutions above won't delete your security group because of these other rules. My scripts are here (one for ec2-classic and one for VPC based security groups): https://gist.github.com/arpcefxl/2acd7d873b95dbebcd42
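As a sketch of that extra step, here is how the cross-references could be found in Python from describe-security-groups output (the sample data is made up; the linked scripts are the complete treatment):

```python
# Sketch: before deleting a group, find ingress rules in *other* groups
# that reference it. Key names (IpPermissions, UserIdGroupPairs, GroupId)
# are those returned by 'aws ec2 describe-security-groups'.
import json

def referencing_groups(describe_output, target_group_id):
    """Return IDs of groups with an ingress rule referencing the target."""
    refs = []
    for sg in json.loads(describe_output)["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for pair in perm.get("UserIdGroupPairs", []):
                if pair.get("GroupId") == target_group_id:
                    refs.append(sg["GroupId"])
    return refs

# Made-up sample: sg-aaa references sg-target, sg-bbb does not.
sample = json.dumps({"SecurityGroups": [
    {"GroupId": "sg-aaa", "IpPermissions": [
        {"UserIdGroupPairs": [{"GroupId": "sg-target"}]}]},
    {"GroupId": "sg-bbb", "IpPermissions": []},
]})
print(referencing_groups(sample, "sg-target"))  # → ['sg-aaa']
```

Each referencing rule would need to be revoked before the delete call succeeds.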
private static void delete(List<String> sgs) {
    AmazonEC2Client ec2 = new AmazonEC2Client(Credentials.getCredentialsProvider());
    ec2.setEndpoint("ec2.us-west-2.amazonaws.com"); // default
    for (String sg : sgs) {
        System.out.println("DELETING SECURITY GROUP " + sg);
        DeleteSecurityGroupRequest delReq = new DeleteSecurityGroupRequest().withGroupName(sg);
        try {
            ec2.deleteSecurityGroup(delReq);
        } catch (Exception e) {
            // skip groups that cannot be deleted (e.g. still referenced)
        }
    }
}
