I have an existing Windows EC2 instance and I'd like to publish custom metrics to CloudWatch and forward logs to CloudWatch Logs. I understand that I need to install the EC2Config agent to do this. Since this is an already provisioned instance, I'm unable to use an IAM role for passing credentials. Will I be able to use an IAM user with the correct policy to do this, i.e. can I hardcode the access key and secret key in EC2Config somewhere?
Also, for enabling CloudWatch custom metrics and logs, is it simply a tick box that enables it?
Will EC2Config have any undesired impact on the OS? I can see many options around password changes and formatting EBS volumes. I assume that if I leave those options alone they won't take effect, since I'm only interested in forwarding logs to CloudWatch.
Thanks
You asked several questions; I will try to address them.
You cannot assign an IAM role to an instance after it has been created.
I would install the AWS CLI tools and run aws configure to store the IAM user's access key and secret key, then check whether the EC2Config agent can pick those credentials up.
The agent sends the logs to CloudWatch. Since it is an agent running on the instance it does consume some system resources, but the impact should be minimal.
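If you want to sanity-check that the IAM user's keys and policy are sufficient before wiring them into the agent, here is a minimal sketch with boto3 (the key values, metric namespace and log group name are placeholders I made up):

```python
import boto3

# Placeholder IAM user keys; normally these would come from `aws configure`
# or environment variables rather than being hardcoded in a script.
session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEKEY",
    aws_secret_access_key="exampleSecretKey",
    region_name="us-east-1",
)

# Publish a test custom metric (needs cloudwatch:PutMetricData in the policy).
session.client("cloudwatch").put_metric_data(
    Namespace="Custom/CredentialCheck",
    MetricData=[{"MetricName": "AgentCredentialCheck", "Value": 1.0, "Unit": "Count"}],
)

# Create a test log group (needs logs:CreateLogGroup in the policy).
session.client("logs").create_log_group(logGroupName="ec2config-credential-check")
```

If both calls succeed, the same IAM user should be able to back the agent's CloudWatch uploads.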
I want to get logs from EC2 machines using some agent, but I don't want to use CloudWatch for this. Is there any solution for this?
I have created a Windows instance and got the .rdp file. How can I get the access logs, i.e. WHO logged into the instance and WHEN, using this .rdp file? Also, how much time was it used for? I need help with approaches for how to achieve this.
It is entirely possible to get every event log using the CloudWatch Logs agent and the Systems Manager service, which works through the SSM Agent, but this only works for instances that have outbound access to send logs to CloudWatch. The best part is that AWS has great documentation for setting up CloudWatch for Windows instances. Please have a look at Windows logs with Cloudwatch.
I have set this up myself for Windows Server, and it can be adapted as needed. Let me know if you get stuck while following the document.
This is one of the best approaches.
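For reference, here is a rough sketch of kicking that configuration off through Systems Manager with boto3. It assumes the legacy AWS-ConfigureCloudWatch SSM document is available in your region and that the instance is already managed by the SSM Agent; the instance ID, region and log group name are placeholders, and the linked AWS documentation has the full EngineConfiguration schema:

```python
import json
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is a placeholder

# Minimal EngineConfiguration: forward the Windows System event log to a
# CloudWatch Logs group. See the AWS documentation for all available
# input/output components and their parameters.
config = {
    "EngineConfiguration": {
        "PollInterval": "00:00:15",
        "Components": [
            {
                "Id": "SystemEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {"LogName": "System", "Levels": "7"},
            },
            {
                "Id": "CloudWatchLogs",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "Region": "us-east-1",
                    "LogGroup": "Windows-System-Log",   # placeholder log group
                    "LogStream": "{instance_id}",
                },
            },
        ],
        "Flows": {"Flows": ["SystemEventLog,CloudWatchLogs"]},
    }
}

# Push the configuration to the instance via the legacy SSM document.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],     # placeholder instance ID
    DocumentName="AWS-ConfigureCloudWatch",
    Parameters={"status": ["Enabled"], "properties": [json.dumps(config)]},
)
```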
I am deploying my microservice onto an EC2 instance via Mesos. The problem is that I am sharing the EC2 instance with other teams' microservices. All these microservices deal with different S3 buckets and we don't want the other teams to have access to our buckets. I need to assign an IAM role to my container so that only I can access my S3 bucket via the microservices deployed on the EC2 instance.
We are not using ECS and we deploy using Mesos. Any input or comment is appreciated. Thanks in advance.
There is no native AWS support for this. In the meantime you can use Lyft's metadataproxy (see also the blog post).
Quoting the blog:
We had an idea to build a web service that proxies calls to the metadata service on http://169.254.169.254 and pass through most of the calls to the real metadata service, but capture calls to the IAM endpoints. By capturing the IAM endpoints we can decide which IAM credentials we’ll hand back.
...
To know which IAM roles should be assumed, the metadataproxy has access to the docker socket. When it gets a request, it looks up the container, based on its request IP, finds that container’s environment variables and uses the value of the IAM_ROLE environment variable as the role to assume. It then uses STS to assume the role, caches the credentials in memory (for further requests) and returns them back to the caller. If the credentials cached in memory are set to expire, the proxy will re-assume the credentials.
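To illustrate the assume-and-cache step, here is a stripped-down sketch with boto3. This is not Lyft's actual code; it assumes you have already mapped the calling container's IAM_ROLE environment variable to a role ARN (for example via the Docker API):

```python
import time
import boto3

sts = boto3.client("sts")
_credential_cache = {}  # role_arn -> (credentials dict, expiry as epoch seconds)


def credentials_for_role(role_arn: str) -> dict:
    """Return temporary credentials for the container's role, reusing cached ones."""
    cached = _credential_cache.get(role_arn)
    if cached and cached[1] - time.time() > 60:
        return cached[0]

    # Assume the role on behalf of the container and cache the result
    # until shortly before the credentials expire.
    response = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="metadataproxy-sketch",
    )
    creds = response["Credentials"]
    _credential_cache[role_arn] = (creds, creds["Expiration"].timestamp())
    return creds
```

The real metadataproxy also impersonates the rest of the metadata service and decides which roles a container is allowed to assume, so using the maintained project is preferable to rolling your own.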
I am running into a security-related issue with AWS Lambda and I'm not sure what the right way to resolve it is.
Consider an EC2 instance A accessing the database on another EC2 instance B. If I want to restrict access to the DB on instance B to instance A only, I would modify the security group and add a custom TCP rule allowing access only from the public IP of instance A. That way, AWS takes care of everything and the DB server will not be accessible from any other IP address.
Now let us replace instance A with a Lambda function. Since it is no longer an instance, there is no fixed IP address. So how do I restrict access to only the Lambda function and block any other traffic?
Have the Lambda function determine its IP, dynamically update instance B's security group, and then reset the security group when done.
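A rough sketch of that flow with boto3 (the security group ID, DB port and IP lookup service are placeholders for illustration, and the function's role needs ec2:AuthorizeSecurityGroupIngress and ec2:RevokeSecurityGroupIngress):

```python
import urllib.request
import boto3

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder security group on instance B
DB_PORT = 3306                               # placeholder database port

ec2 = boto3.client("ec2")


def handler(event, context):
    # Lambda has no fixed IP, so ask an external service which public IP
    # our outbound traffic is currently using.
    my_ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()
    cidr = f"{my_ip}/32"

    # Temporarily open the DB port for this IP only.
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpProtocol="tcp",
        FromPort=DB_PORT,
        ToPort=DB_PORT,
        CidrIp=cidr,
    )
    try:
        pass  # ... talk to the database on instance B here ...
    finally:
        # Always remove the temporary rule, even if the DB work fails.
        ec2.revoke_security_group_ingress(
            GroupId=SECURITY_GROUP_ID,
            IpProtocol="tcp",
            FromPort=DB_PORT,
            ToPort=DB_PORT,
            CidrIp=cidr,
        )
```

Keep in mind that concurrent invocations may add and remove these rules at the same time, so this is fragile compared to the VPC support described below.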
Until there is support for Lambda running within a VPC, this is the only option. Support for that has been announced for later this year. The following quote is from the referenced link above.
Many AWS customers host microservices within an Amazon Virtual Private Cloud and would like to be able to access them from their Lambda functions. Perhaps they run a MongoDB cluster with lookup data, or want to use Amazon ElastiCache as a stateful store for Lambda functions, but don't want to expose these resources to the Internet.
You will soon be able to access resources of this type by setting up one or more security groups within the target VPC, configure them to accept inbound traffic from Lambda, and attach them to the target VPC subnets. Then you will need to specify the VPC, the subnets, and the security groups when you create your Lambda function (you can also add them to an existing function). You'll also need to give your function permission (via its IAM role) to access a couple of EC2 functions related to Elastic Networking.
This feature will be available later this year. I'll have more info (and a walk-through) when we launch it.
I believe the link below will explain the Lambda permission model for you.
http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
I am trying my hand at autoscaling and all is well, except that I need all of my instances to be assigned an Elastic IP (this is for my payment gateway, which needs to know all of the IPs that we are using).
I'm happy to add, say, 8 Elastic IPs to my account, but what I need is a way to automatically assign one of them to an instance as it boots up and then release it when the instance shuts down.
I guess I need a startup script, but this is beyond my knowledge of AWS (so far I do everything through the web console).
Any samples/help appreciated!
If your gateway is deployed in the same Amazon account as your servers, you might want to look at a VPC solution where you can control the instances' private IPs using masks.
If that is not an option, you will need to write a script, which you should add to the Launch Configuration's User Data.
In this script you can use the AWS CLI to find which Elastic IP addresses are available using describe-addresses, and then associate one of them with your newly created instance using associate-address.
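For example, here is a minimal sketch of such a startup script using boto3 instead of the raw CLI. It assumes the instance can reach the metadata service and has credentials that allow ec2:DescribeAddresses and ec2:AssociateAddress; the region is a placeholder:

```python
import urllib.request
import boto3

# Ask the instance metadata service for this instance's own ID.
INSTANCE_ID = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id"
).read().decode()

ec2 = boto3.client("ec2", region_name="eu-west-1")  # region is a placeholder

# Find an Elastic IP in the account that is not associated with anything yet.
addresses = ec2.describe_addresses()["Addresses"]
free = next((addr for addr in addresses if "AssociationId" not in addr), None)
if free is None:
    raise RuntimeError("No unassociated Elastic IPs left in this account/region")

# Attach it to this instance (VPC addresses are identified by AllocationId).
ec2.associate_address(InstanceId=INSTANCE_ID, AllocationId=free["AllocationId"])
```

The release side can be scripted the same way with disassociate-address when the instance shuts down.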