Shell script to automate an AWS environment checklist - bash

I have created an environment in AWS. The environment has networking (VPC), EC2 instances, RDS (MySQL), Redis, ALB, S3, etc.
Now I want to have a shell script (bash) that will show the
EC2 instances - instance types, IPs, termination protection, etc.
Networking - VPC and subnet CIDRs, DNS hostnames - enabled or disabled, etc.
S3 - details like bucket name, policy, default encryption, replication rules, etc.
RDS - ARN, endpoints, reader and writer instances, version, etc.
Redis - version, node type, shards, total nodes, etc.
ALB - DNS name, listeners, etc.
and need to have all these in a file as output.
Note: I have to give only the AWS account number, region, and tags as input.
FYI, the above input values have to be read from a JSON or CSV file.
Can you please help me?
I tried some scripts, but they did not work properly.
Currently, I am manually updating and checking everything.
Note: I have this environment that was created through Terraform and contains networking, a bastion, the backend, a worker node, RDS, S3, and ALB. Now I want to validate all of these as part of a checklist through automation, and I need that in the form of a shell script that reports PASS or FAIL.
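Not a full solution, but a minimal sketch of what such a checklist script could look like, assuming the AWS CLI and jq are installed and the account, region, and tag live in a small JSON file. The file name input.json, the tag layout, and the particular checks below are my assumptions, not something prescribed by your setup:

#!/usr/bin/env bash
# checklist.sh - sketch of an AWS environment checklist with PASS/FAIL output.
# Assumes AWS CLI v2 + jq, configured credentials, and an input file like:
#   {"account":"123456789012","region":"us-east-1","tag_key":"Environment","tag_value":"prod"}
set -euo pipefail

INPUT_FILE="${1:-input.json}"                       # hypothetical input file
REPORT="checklist-$(date +%Y%m%d-%H%M).txt"

ACCOUNT=$(jq -r '.account' "$INPUT_FILE")
TAG_KEY=$(jq -r '.tag_key' "$INPUT_FILE")
TAG_VALUE=$(jq -r '.tag_value' "$INPUT_FILE")
export AWS_DEFAULT_REGION=$(jq -r '.region' "$INPUT_FILE")

check() {                                           # append a PASS/FAIL line to the report
  local desc="$1" ok="${2,,}"
  [[ "$ok" == "true" ]] && echo "PASS  $desc" >> "$REPORT" || echo "FAIL  $desc" >> "$REPORT"
}

# Are we in the right account?
check "running against account $ACCOUNT" \
  "$([[ "$(aws sts get-caller-identity --query Account --output text)" == "$ACCOUNT" ]] && echo true || echo false)"

# EC2: type, IPs, termination protection
for id in $(aws ec2 describe-instances --filters "Name=tag:${TAG_KEY},Values=${TAG_VALUE}" \
              --query 'Reservations[].Instances[].InstanceId' --output text); do
  aws ec2 describe-instances --instance-ids "$id" \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType,PrivateIpAddress,PublicIpAddress]' \
    --output table >> "$REPORT"
  check "EC2 $id termination protection" \
    "$(aws ec2 describe-instance-attribute --instance-id "$id" --attribute disableApiTermination \
         --query 'DisableApiTermination.Value' --output text)"
done

# Networking: VPC/subnet CIDRs and the DNS hostnames attribute
for vpc in $(aws ec2 describe-vpcs --filters "Name=tag:${TAG_KEY},Values=${TAG_VALUE}" \
               --query 'Vpcs[].VpcId' --output text); do
  aws ec2 describe-vpcs --vpc-ids "$vpc" --query 'Vpcs[].[VpcId,CidrBlock]' --output table >> "$REPORT"
  aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc" \
    --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' --output table >> "$REPORT"
  check "VPC $vpc DNS hostnames enabled" \
    "$(aws ec2 describe-vpc-attribute --vpc-id "$vpc" --attribute enableDnsHostnames \
         --query 'EnableDnsHostnames.Value' --output text)"
done

# RDS / Redis / ALB: dump key attributes; add check calls for whatever must be PASS/FAIL.
# (S3 needs per-bucket calls such as s3api get-bucket-encryption / get-bucket-replication.)
aws rds describe-db-clusters \
  --query 'DBClusters[].[DBClusterArn,Endpoint,ReaderEndpoint,EngineVersion]' --output table >> "$REPORT"
aws elasticache describe-replication-groups \
  --query 'ReplicationGroups[].[ReplicationGroupId,CacheNodeType,AutomaticFailover]' --output table >> "$REPORT"
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].[LoadBalancerName,DNSName]' --output table >> "$REPORT"

echo "Report written to $REPORT"

The same pattern (a describe call feeding the check helper) extends to the S3, listener, and Redis attributes you listed.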

This is the kind of thing IaC (Infrastructure as Code) tools such as Terraform were invented for.
You can write down the specifics of your cloud resources (such as S3, Lambda, etc.) and manage versions, configuration, and the backend based on your environment settings.
Here are some common AWS services written in Terraform that you can use as a reference to get started with Terraform.
We use terraform.env.tfvars to pass environment-specific variables, and automate the whole thing using some bash scripts. The reference repo is an actual project from which you can get an idea of how it's done.
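Just to illustrate the pattern, a minimal sketch of such a wrapper script; the file names and the environment argument are assumptions on my side, not taken from the reference repo:

#!/usr/bin/env bash
# deploy.sh <env> - thin wrapper that applies the Terraform config for one environment.
# Assumes a terraform.<env>.tfvars file sits next to the *.tf files.
set -euo pipefail

ENV="${1:?usage: deploy.sh <env>}"     # e.g. dev, staging, prod

terraform init
terraform plan  -var-file="terraform.${ENV}.tfvars" -out="plan.${ENV}.out"
terraform apply "plan.${ENV}.out"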
Best wishes.

Related

Monitoring EBS volumes for instances with CloudWatch Agent and CDK

I'm trying to set up a way to monitor disk usage for instances belonging to an Auto Scaling group, and add an alarm when the volumes associated with the instances are almost full.
Since there seem to be no metrics normally offered by Amazon to do that, I resorted to using the CloudWatch Agent to get what I wanted. So far so good, I can create graphs and alarms for the metrics I want using the CloudWatch console.
My issue is how to automate everything with CDK. How can I automate the creation of the metric for each instance, without knowing the instance id beforehand? Is there a solution for this issue?
You can install and configure the CloudWatch agent via EC2 user data, and the Auto Scaling group uses a launch template to launch the EC2 instances. All of this can be done with AWS CDK.
There is an example from this open source project for your reference.
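For reference, a rough sketch of the user-data side, assuming Amazon Linux 2 and the agent's default paths; the exact metrics config is an assumption, not taken from the project above:

#!/usr/bin/env bash
# EC2 user data (referenced from the CDK launch template): install the CloudWatch agent
# and publish root-volume disk usage. Assumes Amazon Linux 2; paths are the agent defaults.
set -euo pipefail

yum install -y amazon-cloudwatch-agent

cat > /opt/aws/amazon-cloudwatch-agent/etc/config.json <<'EOF'
{
  "metrics": {
    "append_dimensions": { "AutoScalingGroupName": "${aws:AutoScalingGroupName}" },
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}
EOF

/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json

Because the config appends the AutoScalingGroupName dimension, the CDK metric and alarm can be defined per Auto Scaling group rather than per instance id.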
Another approach you could take is using AWS Systems Manager. Essentially, you install an SSM agent for your instances, and create an SSM Document (think Shell/Python script) that will run your setup script/automation.
You then create a State Manager association, tying the SSM document to your instances based on EC2 tags, e.g. Application=MyApp or Team=MyTeam. This way, you don't have to provide any resource ids, just a tag key/value pair, which can cover multiple instances and future instance replacements. You can schedule it to run at specific times (cron) or at a certain frequency (rate) to enforce state.
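For illustration, a rough CLI version of such an association; the tag, the script path, and the schedule are placeholders for your own setup:

# Tie the AWS-RunShellScript document to every instance tagged Application=MyApp,
# re-applied every 30 minutes by State Manager. The command is a placeholder
# for your own setup script.
aws ssm create-association \
  --name "AWS-RunShellScript" \
  --targets "Key=tag:Application,Values=MyApp" \
  --parameters 'commands=["/usr/local/bin/setup-cloudwatch-agent.sh"]' \
  --schedule-expression "rate(30 minutes)" \
  --association-name "cloudwatch-agent-setup"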

Amazon's AWS ability to apply a firewall rule based on source IP and ports

Microsoft Azure has security groups, which allow filtering on source IP and ports. Does Amazon's AWS have the same feature, programmable from the command line?
Not only does AWS have it, it is even called the same thing: security groups. It can be accessed by downloading the AWS CLI. Most of the commands relating to security groups are under aws ec2, for example aws ec2 describe-security-groups.
Some of the commands pertaining to security groups can be fairly confusing, so you might want to look at the GUI before your first time, and reading the docs will be helpful (as it usually is).
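For example, a rule that only allows a given source network to reach port 443 could be added like this; the group id and CIDR are placeholders:

# Allow HTTPS (443) only from the 203.0.113.0/24 source network;
# sg-0123456789abcdef0 stands in for your security group id.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24

# Inspect the result
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0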

Register consul node_meta from ec2 tags

On each of our EC2 instances, we define two tags (Name and Cluster). Is it possible to populate the node_meta of a consul agent running on the instance from the values of these tags?
In the absence of any other obvious way to do this, I've written the following Python script to interrogate EC2 metadata and output a consul config file.
https://github.com/crooks/make_consul_config
The lack of any way to do this from within Consul (despite its capability to read tags for auto-joining) leads me to wonder if there's a good reason for not doing it. Opinions will be very gratefully received.
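In case it helps, here is a rough bash equivalent of that approach, assuming the AWS CLI is on the instance and the instance profile may call ec2:DescribeTags; the output path and meta key names are my assumptions:

#!/usr/bin/env bash
# Write a Consul config fragment whose node_meta is filled from the
# instance's Name and Cluster EC2 tags. Uses IMDSv1 for brevity; with
# IMDSv2 enforced you would need to request a token first.
set -euo pipefail

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)

get_tag() {
  aws ec2 describe-tags --region "$REGION" \
    --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=$1" \
    --query 'Tags[0].Value' --output text
}

cat > /etc/consul.d/node_meta.json <<EOF
{
  "node_meta": {
    "name": "$(get_tag Name)",
    "cluster": "$(get_tag Cluster)"
  }
}
EOF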

Monitoring instances in cloud

I usually use Munin as monitoring software, but this (like other software, I presume) needs an IP to make the ICMP or whatever pings to collect data.
In Amazon EC2, instances are created on the fly, with IPs you don't know in advance.
How can they be monitored?
I was thinking about using the Amazon console commands to read the IPs of the running instances and change the Munin configuration file on the fly as well, but that might be too complicated... or not?
Any other solution/suggestion?
Thank you
I use RevealCloud to monitor my Amazon instances. You can install it once and create an AMI from that system, or bootstrap the install command if that's your method. Since the install is just one command, it's easy enough to put into rc.local (or similar). You can then see all the instances in the dashboard or top view as soon as they boot up.
Our instances are bootstrapped using Chef recipes, so it's easier for me to provide IPs/hosts, as they (i.e. all members of my cluster) get entered into /etc/hosts on start-up. Generally, it doesn't hurt to use Elastic IPs for a master server and allow all connections (in /etc/munin/munin.conf by default).
I'd solve the security 'question' at the security group level, e.g. allow only instances with a certain security group to connect to the munin-node process (on port 4949).
E.g., using ec2-authorize you can achieve this:
ec2-authorize mygroup -o monitorgroup -u <AWS-USER-ID>
This means that all instances in the monitorgroup group can access resources on instances in mygroup.
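For reference, the equivalent rule with the current AWS CLI looks roughly like this, using the same group names as in the example above:

# Let members of monitorgroup reach munin-node (TCP 4949) on instances in mygroup.
aws ec2 authorize-security-group-ingress \
  --group-name mygroup \
  --protocol tcp \
  --port 4949 \
  --source-group monitorgroup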
Let me know if this helps!
If your Munin master and nodes are all hosted on EC2, then it's better to use internal hostnames like domU-00-00-00-00-00-00.compute-1.internal, because this way you don't have to deal with IP addresses and security groups.
You also have to set this in /etc/munin/munin-node.conf:
allow ^.*$
You can read more about it in Monitoring AWS Ubuntu Instances using Munin
But if your Munin master is not on EC2, your best bet is to attach an Elastic IP to your EC2 instance.

How to sync my EC2 instance when autoscaling

When autoscaling the EC2 instances for my application, what is the best way to keep every instance in sync?
For example, there are custom settings and application files like below...
Apache httpd.conf
php.ini
PHP source for my application
To get my autoscaling working, all of these must be configured the same on each EC2 instance, and I want to know the best practice for syncing these elements.
You could use a private AMI which contains scripts that install software or check out the code from SVN, etc. A second possibility is to use a deployment framework like Chef or Puppet.
The way this works with Amazon EC2 is that you can pass user data to each instance -- generally a script of some sort that runs commands, e.g. for bootstrapping. As far as I can see, CreateLaunchConfiguration allows you to define that as well.
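A sketch of what such a user-data bootstrap could look like for the Apache/PHP example above; the repository URLs and file locations are placeholders:

#!/usr/bin/env bash
# User data sketch for an Auto Scaling launch configuration/template:
# pull config and application code on first boot so every new instance
# converges to the same state. URLs and paths are placeholders.
set -euo pipefail

yum install -y httpd php git

# Fetch shared config and code from a repository you control
git clone https://example.com/myapp/config.git /tmp/config
cp /tmp/config/httpd.conf /etc/httpd/conf/httpd.conf
cp /tmp/config/php.ini    /etc/php.ini

git clone https://example.com/myapp/source.git /var/www/html

systemctl enable --now httpd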
If running this yourself is too much of an obstacle, I'd recommend a service like:
scalarium
rightscale
scalr (also opensource)
They all offer some form of scaling.
HTH
