I would like to create a monitoring instance with rights to create, terminate, and destroy instances, autoscaling groups, tags, etc. within the scope of the CloudFormation stack it was created in.
What resource should I give the policy to make it work?
{
    "PolicyName": "ManageCloudformationInstances",
    "PolicyDocument": {
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:*"
                ],
                "Resource": "?????"
            }
        ]
    }
},
So I guess there are two parts to your question.
If you are creating the instances in your CloudFormation template, then you can easily use the intrinsic functions (GetAtt/Ref) to pull the identifiers for those resources.
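For example, a minimal sketch (the logical ID MyInstance is a placeholder; an EC2 instance exposes its ID via Ref, so the instance ARN is assembled with Fn::Sub):
{
    "PolicyName": "ManageCloudformationInstances",
    "PolicyDocument": {
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:TerminateInstances",
                    "ec2:StopInstances",
                    "ec2:CreateTags"
                ],
                "Resource": {
                    "Fn::Sub": "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/${MyInstance}"
                }
            }
        ]
    }
}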
However, if you are trying to dynamically allow it to delete instances inside of an autoscaling group, then you need to dynamically edit your policy to allow that. The easiest way that comes to mind is to trigger a Lambda function every time your ASG scales, and edit the policy to include the ARNs from the recent scaling activity.
You probably want to start with something like this for the ASG - http://docs.aws.amazon.com/autoscaling/latest/userguide/cloud-watch-events.html
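A sketch of the event pattern such a rule could use to invoke the Lambda (the ASG name my-asg is a placeholder):
{
    "source": ["aws.autoscaling"],
    "detail-type": [
        "EC2 Instance Launch Successful",
        "EC2 Instance Terminate Successful"
    ],
    "detail": {
        "AutoScalingGroupName": ["my-asg"]
    }
}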
How can I prevent users who have access to the Kibana Dev Tools from making any inadvertent changes, updates, or deletes in a particular index? Basically, what I am looking for is some kind of authorisation for a particular index, so that only authorised users can be given R/W access and any other users have only read permission.
You can define privileges like read, write, delete, etc. in user roles. Privileges are categorized into cluster and index privileges, as documented on this page:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-privileges.html
The index privileges are what you're looking for.
After creating the roles (e.g. one for read-write and one for read-only), you simply need to assign the particular users to these roles. Elasticsearch will then check the user's privileges on every action they attempt to execute and block the action if needed. Internally, this is done via the has_privileges API.
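For illustration, a minimal sketch of that check (the index name my-index is a placeholder):
GET /_security/user/_has_privileges
{
    "index": [
        {
            "names": ["my-index"],
            "privileges": ["write"]
        }
    ]
}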
Here's a guide on how to define roles:
https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html
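A minimal sketch of a read-only role, assuming an index named my-index:
POST /_security/role/my_index_read_only
{
    "indices": [
        {
            "names": ["my-index"],
            "privileges": ["read"]
        }
    ]
}
A read-write role would list read and write in privileges instead; users are then assigned whichever role applies to them.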
Here are some further resources related to that topic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/authorization.html
https://www.elastic.co/guide/en/kibana/current/development-security-rbac.html
I hope this helps.
Alternatively, using an IP-based access policy, you can allow or deny actions (es:ESHttpDelete, es:ESHttpGet, es:ESHttpHead, es:ESHttpPost, es:ESHttpPut, es:ESHttpPatch). This is especially useful if you don't want to permanently turn on fine-grained access control (which utilizes roles), for whatever reason. Although I do like the fine-grained capability to limit access to fields.
An IP-based access policy example, adapted from ES Identity and Access Management:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "es:ESHttpDelete"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "192.0.2.0/24"
                    ]
                }
            },
            "Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
        }
    ]
}
I can select an event from the event templates when I trigger a Lambda function. How can I create a customized event template in Terraform? I want to make it easier for developers to trigger the Lambda by selecting this customized event template from the list.
I'd like to add an event to this list (the template dropdown in the console).
Unfortunately, at the time of this answer (2020-02-21), there is no way to accomplish this via the APIs that AWS provides. Ergo, the Terraform provider does not have the ability to accomplish this (it's limited to what's available in the APIs).
I also have wanted to be able to configure test events via terraform.
A couple of options:
Propose to AWS that they expose some APIs for managing test events. This would give contributors to the AWS Terraform provider the opportunity to add this resource.
Provide the developers with a Postman collection, a set of shell scripts (or other scripts) using the awscli, or some other mechanism to invoke the Lambdas. This is essentially the same as pulling the templating functionality out of the console and into your own tooling.
I did try something and it worked. I must warn that this is reverse engineering and may break anytime in the future, but it works well for me so far.
As per the Amazon docs for testing Lambda functions, whenever a shareable test event is created for any Lambda, it is stored under a new schema in the lambda-testevent-schemas schema registry.
I made use of this information and figured out the conventions AWS follows to keep track of the events, so that I can use those conventions to manage the resources with Terraform.
The name of the schema is _<name_of_lambda_function>-schema, so from Terraform I manage a new schema with exactly that name:
resource "aws_schemas_schema" "my_lambda_shared_events" {
name = "_${aws_lambda_function.my_lambda.function_name}-schema"
registry_name = "lambda-testevent-schemas"
type = "OpenApi3"
description = "The schema definition for shared test events"
content = local.my_lambda_shared_events_schema
}
I create a JSON document (my_lambda_shared_events_schema) which follows the OpenAPI3 convention. For example:
{
    "components": {
        "examples": {
            "name_of_the_event_1": {
                "value": {
                    ... the value you need ...
                }
            },
            "name_of_the_event_2": {
                "value": {
                    ... the value you need ...
                }
            }
        },
        "schemas": {
            "Event": {
                "properties": {
                    ... structure of the event you need ...
                },
                "required": [... any required params ...],
                "type": "object"
            }
        }
    },
    "info": {
        "title": "Event",
        "version": "1.0.0"
    },
    "openapi": "3.0.0",
    "paths": {}
}
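In Terraform, this document can live in a locals block; a sketch using jsonencode, with placeholder content, so the HCL and the JSON stay in sync:
locals {
  my_lambda_shared_events_schema = jsonencode({
    openapi = "3.0.0"
    info    = { title = "Event", version = "1.0.0" }
    paths   = {}
    components = {
      examples = {
        name_of_the_event_1 = {
          value = { key = "example value" }
        }
      }
      schemas = {
        Event = {
          type       = "object"
          properties = { key = { type = "string" } }
          required   = ["key"]
        }
      }
    }
  })
}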
After terraform apply, you should be able to see the Terraform-managed shareable events in the AWS console.
Some important gotchas when using this method:
If you add events using Terraform with this method, any events added from the console will be lost.
The schema registry lambda-testevent-schemas is a special registry and must NOT be managed using Terraform, as that may disrupt other Lambda functions' events created outside the scope of this Terraform module.
The lambda-testevent-schemas registry must exist beforehand. You can either add a check that creates this registry before the module is applied (see the CLI sketch after this list), or create any shareable test event for any Lambda function from the console once. This needs to be done once per region per account.
If you face difficulties creating the JSON schema for your Lambda, you can create the events from the console once and then copy the JSON from the EventBridge schema registry.
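For the pre-check mentioned above, a sketch using the AWS CLI (creates the registry only if it does not exist yet):
aws schemas describe-registry --registry-name lambda-testevent-schemas ||
    aws schemas create-registry --registry-name lambda-testevent-schemas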
I think I have a pretty common scenario but I couldn't find a condensed guide on how to achieve my goal. I am developing an Alexa Skill and have the following:
Lambda function: arn:aws:lambda:us-west-2:123456789012:function:lambda1
Currently the Alexa Skill is working fine with the Lambda doing a simple task. The Lambda's Execution Role is:
arn:aws:iam::123456789012:role/service-role/lambda1-role-rp2z9bjn
I now want to add DynamoDB to my Lambda (Node.js). I created the following:
IAM user: arn:aws:iam::123456789012:user/user1
DynamoDB table: arn:aws:dynamodb:us-west-2:123456789012:table/T1
How do I now hook up everything together, that is, to let the Lambda function perform read/write operations on T1? Is the IAM user even necessary?
You do not need the IAM User.
Instead, add DynamoDB permissions to the IAM Role being used by the Lambda function. For example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/T1"
        }
    ]
}
The Lambda function will then be permitted to use the table.
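A minimal Node.js sketch of the function side (assuming T1 has a partition key named id, which is an assumption; adjust to your schema):
// Minimal sketch, AWS SDK for JavaScript v2 (bundled in the Node.js Lambda runtimes of the time).
// No explicit credentials are needed: the SDK picks up the execution role automatically.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    await docClient.put({ TableName: 'T1', Item: { id: 'user-1', score: 42 } }).promise();
    const result = await docClient.get({ TableName: 'T1', Key: { id: 'user-1' } }).promise();
    return result.Item;
};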
I have created an IAM user in my AWS account for someone to assist me in a project.
How do I grant full access for this user to one specified Lambda function?
By full access, I mean he has the same rights as me (root account) regarding this one function, but cannot do other things like creating new functions or viewing other functions.
First, be careful what you mean by "full access". This would include the ability to delete the function, which is probably not something you'd like to allow.
Take a look at: Actions, Resources, and Condition Keys for AWS Lambda - AWS Identity and Access Management
It lists all Lambda-related actions. You'll notice that many of the entries have function listed in the Resource Types column. This means you can limit the permission being granted to only the stated function(s). So, you'll probably want to grant only those actions that can be restricted by function.
The result would be a policy similar to:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeAsync",
                "lambda:InvokeFunction",
                "lambda:UpdateFunctionCode",
                "lambda:UpdateFunctionConfiguration"
            ],
            "Resource": "arn:aws:lambda:ap-southeast-2:123456789012:function:my-function"
        }
    ]
}
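You can attach this to the user as an inline policy, for example via the CLI (the user name and file name here are placeholders):
aws iam put-user-policy \
    --user-name assistant \
    --policy-name my-function-access \
    --policy-document file://my-function-access.json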
I have an Ansible role to handle creation of an RDS instance and the databases on that instance. The role allows a security group to be specified for the database. I want to be able to add a rule to the security group at the beginning of the role that allows access from the current host so that Ansible can run some database creation/maintenance tasks. I then want to remove this rule from the security group while maintaining the existing groups.
What I've done so far is use the ec2_group_facts module to get information about the given security group, which I save in the security_group variable. I then add a rule with a task similar to the following:
- name: Add hole to security group
  local_action:
    module: ec2_group
    name: "{{ security_group.group_name }}"
    purge_rules: no
    rules:
      - proto: tcp
        from_port: "{{ db_port }}"
        to_port: "{{ db_port }}"
        cidr_ip: 0.0.0.0/0
This all works properly. The issue is that at the end of the role, when I want to restore the existing rules, the format of the rules returned by ec2_group_facts is not accepted by the ec2_group module. The information saved in security_group is in the following format:
{
    "group_id": "sg-1234abcd",
    "group_name": "security-group",
    "ip_permissions": [
        {
            "from_port": 1234,
            "ip_protocol": "tcp",
            "ip_ranges": [
                {
                    "cidr_ip": "0.0.0.0/0"
                }
            ],
            "ipv6_ranges": [],
            "prefix_list_ids": [],
            "to_port": 1234,
            "user_id_group_pairs": []
        }
    ],
    "ip_permissions_egress": [],
    "owner_id": "123456789012",
    "tags": {
        "Name": ""
    },
    "vpc_id": "vpc-1234abcd"
}
The rules argument of the ec2_group module needs a list of objects with proto, from_port, to_port, and cidr_ip attributes, so how would I map the data above to the required format?
Edit: I guess one solution would be to add a temporary security group that allows access from the current host. If my understanding of EC2 security groups is correct, the rules of all security groups associated with an instance are combined, with the most permissive rule applying, so this would achieve what I want. However, this would require editing the security groups attached to an existing RDS instance, so I would prefer to edit the rules of an existing security group if possible.
Edit 2: Travis CI publishes the IP addresses used to run builds. I could just add these to the security group permanently, although I'm not sure what the security implications of this are.
ec2_group docs
ec2_group_facts docs
When running playbooks you want a consistent state, and from the sounds of things you don't have one throughout your play.
I would suggest that the additional tasks you would like to perform on the database could be run from some other instance that is more trusted (perhaps the place you're running Ansible from?).
Consider what will happen if a playbook is run twice at the same time. Perhaps this isn't something your workflow allows for, but you should still consider this case.
If this isn't an option, or you would rather not change your implementation, then your edit's suggestion sounds more suitable: apply your standard rules, add to them (or create a new security group for this purpose) when required, and then destroy or modify them when no longer required.
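That said, if you do want to transform the ec2_group_facts output back into the format the rules argument expects, a minimal Jinja2 sketch could look like the following. It only covers IPv4 CIDR rules; with purge_rules: yes, any user_id_group_pairs or ipv6_ranges entries would need the same treatment or they will be removed:
- name: Restore original security group rules
  local_action:
    module: ec2_group
    name: "{{ security_group.group_name }}"
    purge_rules: yes
    rules: >-
      {%- set rules = [] -%}
      {%- for perm in security_group.ip_permissions -%}
        {%- for ip_range in perm.ip_ranges -%}
          {%- set _ = rules.append({
                'proto': perm.ip_protocol,
                'from_port': perm.from_port,
                'to_port': perm.to_port,
                'cidr_ip': ip_range.cidr_ip}) -%}
        {%- endfor -%}
      {%- endfor -%}
      {{ rules }}
Because purge_rules is yes here, the temporary hole added at the start of the role is dropped in the same step that the original rules are restored.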