OpenStack VM creation using alerts from Splunk - ansible

As per my understanding, in AWS we can combine AWS CloudWatch and AWS Elastic Beanstalk to automate VM creation. For example, we can configure CloudWatch to trigger an alert on a certain condition and, depending on that, create or alter a VM. Is there a way to do the same with OpenStack using Terraform scripts?
Currently, we are creating and managing OpenStack VMs using Terraform and Ansible scripts. We have Splunk for dashboards and alerts. Is there a way to execute Terraform scripts for VMs when we get an alert from Splunk? Please correct me if my understanding is wrong.

Is there a way to execute Terraform scripts for VMs as we get an alert from Splunk?
AWX (or its Tower friend) will do that fairly trivially via /api/v2/job_templates/{id}/launch/. If there needs to be some API massaging (either to keep the credentials out of Splunk or to reshape the webhook payload), then a small relay such as a Lambda function could handle that; a minimal sketch of the launch call is shown below.
If you are using Terraform to drive Ansible (instead of the other way around), then I would guess you could use Atlantis or TerraHub in roughly the same manner.
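
For reference, the Splunk alert action (or the relay in front of it) essentially has to make one authenticated POST; the AWX hostname, job template ID, token variable, and extra_vars below are placeholders for illustration, not part of the original setup.

    # Minimal sketch: launch an AWX job template that runs the
    # Terraform/Ansible provisioning when a Splunk alert fires.
    # Hostname, template ID 42, $AWX_TOKEN, and extra_vars are hypothetical.
    curl -s -X POST \
      -H "Authorization: Bearer $AWX_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"extra_vars": {"triggering_alert": "high_cpu", "vm_count": 1}}' \
      https://awx.example.com/api/v2/job_templates/42/launch/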

Related

Shell script to automate the checklist of the AWS environment

I have created an environment in AWS. The environment has networking (VPC), EC2 instances, RDS (MySQL), Redis, ALB, S3, etc.
Now I want to have a shell script (bash) that will show the
EC2 instances - instance types, IPs, termination protection, etc.
Networking - VPC and subnet CIDRs, DNS hostnames (enabled or disabled), etc.
S3 - details like policy, bucket name, default encryption, replication rules, etc.
RDS - ARN, endpoints, reader and writer instances, version, etc.
Redis - version, node type, shards, total nodes, etc.
ALB - DNS name, listeners, etc.
and need to have all these in a file as output.
Note: I have to give only the AWS account number, region, and tags as input.
FYI, the above input values have to be taken from JSON or any CSV file.
Can you please help me?
I tried some scripts, but they did not work properly.
Currently, I am manually updating and checking everything.
Note: I have this environment, created through Terraform, that contains networking, a bastion, the backend, a worker node, RDS, S3, and ALB. Now I want to validate all of these as part of a checklist through automation, in the form of a shell script that reports PASS or FAIL.
This is what IaC (Infrastructure as Code) tools such as Terraform were invented for.
You can describe your cloud resources (such as S3, Lambda, etc.) in code and manage versions, configuration, and backends based on your environment settings.
Here are some common AWS services written in Terraform that you can use as a reference to get started with Terraform.
We use terraform.env.tfvars to pass environment-specific variables and automate the whole thing with some bash scripts. The reference repo is an actual project from which you can get an idea of how it's done.
Best wishes.
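
If a plain shell-script checklist is still wanted on top of that, a minimal sketch using the AWS CLI could look like the following; the region, tag key/value, and the handful of checks shown are assumptions to be replaced by your own inputs (JSON/CSV) and resources.

    #!/usr/bin/env bash
    # Minimal PASS/FAIL checklist sketch using the AWS CLI.
    # REGION and the tag key/value are placeholders; feed them from your JSON/CSV input.
    set -euo pipefail

    REGION="us-east-1"
    TAG_KEY="Environment"
    TAG_VALUE="prod"

    check() {
      local name="$1" count="$2"
      if [ "$count" -gt 0 ]; then echo "PASS: $name ($count found)"; else echo "FAIL: $name"; fi
    }

    # Running EC2 instances carrying the tag
    ec2_count=$(aws ec2 describe-instances --region "$REGION" \
      --filters "Name=tag:$TAG_KEY,Values=$TAG_VALUE" "Name=instance-state-name,Values=running" \
      --query 'length(Reservations[].Instances[])' --output text)
    check "EC2 instances" "$ec2_count"

    # VPCs carrying the tag
    vpc_count=$(aws ec2 describe-vpcs --region "$REGION" \
      --filters "Name=tag:$TAG_KEY,Values=$TAG_VALUE" \
      --query 'length(Vpcs)' --output text)
    check "VPC" "$vpc_count"

    # S3 buckets in the account (list-buckets is global and not tag-filtered)
    s3_count=$(aws s3api list-buckets --query 'length(Buckets)' --output text)
    check "S3 buckets" "$s3_count"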

How to execute some script on the ec2 instance from the bitbucket-pipelines.yml?

I have a Bitbucket repository with a Bitbucket pipeline, and an EC2 instance. The EC2 instance has access to the repository (it can perform pull and docker build/run).
So it seems I only need to upload some bash scripts to EC2 and call them from the Bitbucket pipeline. How can I call them? Usually an SSH connection is used to run scripts on EC2; is that applicable from a Bitbucket pipeline? Is it a good solution?
There are two ways to solve this problem; I will leave the choice up to you.
I see you are using AWS, and AWS has a nice service called CodeDeploy. You can use it, create a few deployment scripts, and then integrate it with your pipeline. The problem is that it requires an agent to be installed, so it will consume some resources (not much), but if you are looking for an agentless design this solution won't work. You can check the example in the following answer: https://stackoverflow.com/a/68933031/8248700
You can use something like Python Fabric (a small gun) or Ansible (a big cannon) to achieve this. It is an agentless design that works purely over SSH.
I'm using both approaches for different scenarios: CodeDeploy for AWS and Python Fabric for any other cloud vendor. (CodeDeploy can be used outside AWS, but then it falls under on-premises pricing, which charges per deployment.)
I hope this brings some clarity.
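
For the plain SSH route, the pipeline step essentially boils down to a single command; the user, host variable, and script path below are placeholders, and the private key would live in a secured Bitbucket repository variable.

    # Minimal sketch: run a script that already exists on the EC2 instance.
    # ec2-user, $EC2_HOST, and /opt/myapp/deploy.sh are hypothetical.
    ssh -o StrictHostKeyChecking=no ec2-user@"$EC2_HOST" 'cd /opt/myapp && ./deploy.sh'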

Monitoring EBS volumes for instances with CloudWatch Agent and CDK

I'm trying to set up a way to monitor disk usage for instances belonging to an AutoScaling Group, and add an alarm when the volumes associated to the instances are almost full.
Since it seems there are no metrics normally offered by Amazon for that, I resorted to using the CloudWatch Agent to get what I wanted. So far so good; I can create graphs and alarms for the metrics I want using the CloudWatch console.
My issue is how to automate everything with CDK. How can I automate the creation of the metric for each instance, without knowing the instance id beforehand? Is there a solution for this issue?
You can install and configure the CloudWatch agent via EC2 user data, and the Auto Scaling group uses a launch template to launch the EC2 instances. All of this can be done with AWS CDK.
There is an example from this open source project for your reference.
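
For reference, a minimal user-data sketch (assuming Amazon Linux 2) that installs the agent and reports disk usage per Auto Scaling group might look like the following; the config shown is a bare-bones assumption, not a complete production config.

    #!/bin/bash
    # Install the CloudWatch agent and ship a disk used_percent metric,
    # grouped by Auto Scaling group name.
    yum install -y amazon-cloudwatch-agent

    cat > /opt/aws/amazon-cloudwatch-agent/etc/config.json <<'EOF'
    {
      "metrics": {
        "append_dimensions": { "AutoScalingGroupName": "${aws:AutoScalingGroupName}" },
        "metrics_collected": {
          "disk": { "measurement": ["used_percent"], "resources": ["/"] }
        }
      }
    }
    EOF

    /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
      -a fetch-config -m ec2 \
      -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s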
Another approach you could take is using AWS Systems Manager. Essentially, you install the SSM Agent on your instances and create an SSM Document (think shell/Python script) that will run your setup script/automation.
You then create a State Manager Association, tying the SSM Document with your instances based on EC2 tags e.g. Application=MyApp or Team=MyTeam. This way, you don't have to provide any resource ids, just the tag key value pair which could extend multiple instances and future instance replacements. You can schedule it to run at specific times (cron) or at a certain frequency (rate) to enforce state.
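
As a rough illustration, the association itself is a single CLI call; the document name and tag below are hypothetical.

    # Sketch: associate an existing SSM document with instances by tag and
    # re-apply it every 30 minutes to enforce state.
    aws ssm create-association \
      --name "MyApp-SetupCloudWatchAgent" \
      --targets "Key=tag:Application,Values=MyApp" \
      --schedule-expression "rate(30 minutes)"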

How to run an Ansible playbook command from a remote server

I need to install and configure every new system that comes up via Auto Scaling in AWS as per the requirements; for example, if it is an app server, install Node.js and deploy the respective Git code using Ansible.
How does Ansible identify that a new system has come up and needs all this configuration?
Here is a guide from the Ansible docs on how to handle autoscaling with Ansible: https://docs.ansible.com/ansible/latest/scenario_guides/guide_aws.html#autoscaling-with-ansible-pull
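
For reference, the ansible-pull approach from that guide typically ends up as a small piece of launch-template user data; the repository URL and playbook name below are placeholders.

    #!/bin/bash
    # Sketch: have each newly launched instance pull and apply its own playbook.
    # Package installation varies by distro; the repo URL and local.yml are hypothetical.
    yum install -y git ansible
    ansible-pull -U https://github.com/example/infra-playbooks.git local.yml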
The problem with this approach is that you need to run the whole provisioning process on startup, which takes a lot of time and is error prone.
A common solution is to build a custom AMI with everything your service needs and only deploy your current code to that machine.
A good tool for building custom AMIs is Packer; a guide for AWS is available here: https://www.packer.io/docs/builders/amazon.html

How to poll AWS CLI in shell script?

As part of my CD pipeline in snap-ci.com, I want to start the instances in my AWS OpsWorks stack before deploying the application.
As starting hosts takes a certain amount of time (after the command has already returned), I need to poll for the instances to be running (using the describe-instances command in the AWS CLI). This command returns a full JSON response, one of whose fields contains the status of the instance (e.g. "running").
I am new to shell scripting and AWS CLI and would appreciate some pointers. I am aware that I can also use the AWS SDK's to program it in java, but that would require to deploy that program to the snap-ci hosts first which sounds complex as well.
The AWS CLI has support for wait commands; these will block and wait for the condition you specify, such as an instance being ready.
The "Advanced Usage of the AWS CLI" talk from re:Invent 2014 shows how to use waiters (18:55), queries, profiles, and other tips for the CLI.
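
As a minimal sketch (the instance ID is a placeholder; OpsWorks instances are plain EC2 instances underneath, so the EC2 waiter applies to the IDs returned by describe-instances):

    # Built-in waiter: blocks until the instance reaches the "running" state.
    aws ec2 wait instance-running --instance-ids i-0123456789abcdef0

    # Manual polling alternative, e.g. for a custom condition or timeout:
    while true; do
      state=$(aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[0].Instances[0].State.Name' --output text)
      [ "$state" = "running" ] && break
      echo "Current state: $state, waiting..."
      sleep 15
    done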
