Executing a Unix Shell Script in AWS or GCP Environment - bash

I have developed a shell script that connects to an AWS EKS cluster, executes a kubectl command, gets the result as a JSON string, and inserts it into an AWS RDS MySQL database. I tested the script in AWS CloudShell and it works fine there. Now I want to schedule the script (probably with a cron scheduler) to run every 30 minutes, and I am exploring options for where to schedule the Unix shell job. Below are the options I can see from the web:
From an AWS EC2 Instance
Using AWS Systems Manager
For #1, I can start a free-tier EC2 instance and schedule the cron job.
For #2, I am not very sure how AWS Systems Manager works.
Is there any other approach to schedule a shell job in AWS or GCP?
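For #1, the crontab entry itself would be something like the following (a sketch only: the script path /home/ec2-user/eks-report.sh and the log file are names I am making up, and it assumes the instance role has the EKS and RDS access the script needs):
# Run every 30 minutes and keep a log of each run
*/30 * * * * /home/ec2-user/eks-report.sh >> /home/ec2-user/eks-report.log 2>&1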

Related

Modify user_data after stopping aws EC2 with ansible playbook(ec2 or ec2_instance module)

I have an EC2 instance that was launched with the Ansible ec2 module using user_data (say data1). I stopped the EC2 instance, and now I want to modify the user_data (say data2) and start the instance again. I am supplying the modified user_data, but it is not getting reflected on AWS.
To summarize: how do I modify the user_data of a stopped AWS EC2 instance using an Ansible script (with the ec2 or ec2_instance module)?
By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance. You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance.
User Data for every restart
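The linked article amounts to wrapping the user data in a MIME multipart document so that cloud-init's scripts-user module runs on every boot rather than only the first one. A rough sketch of that format (the echo line is just an illustration):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
# Runs on every boot because scripts-user is marked "always" above
echo "user data ran at $(date)" >> /var/log/user-data-run.log
--//--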

Start AWS EC2 instance, run commands, stream logs to console and terminate

I am trying to run a few steps of CI/CD on an EC2 instance. Please don't ask for reasons.
I need to:
1) Start an instance using the AWS CLI. Set a few environment variables.
2) Run a few bash commands.
3) Stream the output from the above commands to the console of the calling script.
4) If any of the commands fail, fail the calling script as well.
5) Terminate the instance.
There is an SO thread which indicates that streaming the output is not that easy. [1]
What I would do, if I had to implement this task:
Start the instance using the CLI command aws ec2 run-instances with an AMI that has the AWS SSM agent preinstalled. [2]
Run your commands using AWS SSM. [3] This has the benefit that you can run any number of commands you want, whenever you want (i.e. the commands need not be specified at instance launch, but can be chosen afterwards). You also get the status code of each command. [4]
Use the CloudWatch integration in SSM to stream your command output to CloudWatch logs. [5]
Stream the logs from CloudWatch to your own instance. [6]
Note: Instead of streaming the command output via CloudWatch, you could also periodically poll the SSM API using aws ssm get-command-invocation; a rough sketch follows below. [7]
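As an illustration of [3], [4] and [7], sending a command and polling its result could look like this (INSTANCE_ID and run-ci-steps.sh are placeholders; it assumes the instance profile allows SSM):
# Send the command and capture its ID
cmd_id=$(aws ssm send-command \
  --instance-ids "$INSTANCE_ID" \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["./run-ci-steps.sh"]' \
  --query 'Command.CommandId' --output text)
# Poll until the invocation leaves Pending/InProgress
status=InProgress
while [ "$status" = "Pending" ] || [ "$status" = "InProgress" ]; do
  sleep 5
  status=$(aws ssm get-command-invocation --command-id "$cmd_id" \
    --instance-id "$INSTANCE_ID" --query 'Status' --output text)
done
# Print the output and fail the caller if the command did not succeed
aws ssm get-command-invocation --command-id "$cmd_id" --instance-id "$INSTANCE_ID" \
  --query 'StandardOutputContent' --output text
[ "$status" = "Success" ] || exit 1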
Reference
[1] How to check whether my user data passing to EC2 instance working or not?
[2] Working with SSM Agent - AWS Systems Manager
[3] Walkthrough: Use the AWS CLI with Run Command - AWS Systems Manager
[4] Understanding Command Statuses - AWS Systems Manager
[5] Streaming AWS Systems Manager Run Command output to Amazon CloudWatch Logs | AWS Management Tools Blog
[6] how to view aws log real time (like tail -f)
[7] get-command-invocation — AWS CLI 1.16.200 Command Reference
Approach 1.
Start an instance using the AWS CLI:
aws ec2 start-instances --instance-ids i-1234567890abcdef0
Set a few environment variables: use the EC2 user data of the instance to set them and run commands.
..
Run your other logic / scripts.
To terminate the instance, run the commands below on the instance itself:
instanceid=$(curl http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$instanceid"
Approach 2.
Use Python with boto3, or Chef with Test Kitchen as part of your CI.

Shell-Install one script into group of servers

I have a shell script which needs to be installed on over 100 Ubuntu instances/servers. What is the best way to install the same script on all instances without logging into each one?
You can use AWS Systems Manager. According to the AWS documentation:
You can send commands to tens, hundreds, or thousands of instances by using the targets parameter (the Select Targets by Specifying a Tag option in the Amazon EC2 console). The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you execute the command, the system locates and attempts to run the command on all instances that match the specified tags.
You can target instances by tag:
aws ssm send-command --document-name name --targets Key=tag:tag_name,Values=tag_value [...]
or target instance IDs:
aws ssm send-command --document-name name --targets Key=instanceids,Values=ID1,ID2,ID3 [...]
Read the AWS documentation for details.
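For example, a minimal sketch that installs a script on every instance carrying a given tag (the tag, URL and destination path are made up; it assumes the SSM agent and a suitable instance profile on the targets):
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Env,Values=prod" \
  --comment "install myscript across the fleet" \
  --parameters 'commands=["curl -fsSL https://example.com/myscript.sh -o /usr/local/bin/myscript.sh","chmod +x /usr/local/bin/myscript.sh"]'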
Thanks
You have several different options when trying to accomplish this task.
Like Kush mentioned, AWS Systems Manager is great, but it is a tightly coupled AWS service.
Packer - You could use Packer to create an AMI for the servers with the script already installed, or with whatever the script does already baked in.
Configuration Management.
Ansible/Puppet/Chef - These tools allow you to manage thousands of servers with only a couple of commands. My preference would be Ansible: it is lightweight, its syntax is plain YAML, it connects over SSH, and it still lets you push plain shell scripts if need be, as in the sketch below.
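With Ansible's ad-hoc script module, for instance, this could be as small as the following (the inventory group name ubuntu_fleet is made up; it assumes SSH access to the hosts is already configured in your inventory):
# Copies the local script to every host in the group and runs it there
ansible ubuntu_fleet -m script -a "./myscript.sh"
The same one-liner works whether the group has five hosts or a hundred, since Ansible fans the connections out in parallel.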

instructions/manual on auto launch/shutdown on EC2

I need something pretty trivial.
I have a server whose crontab, every night, will run "something" that launches a new EC2 instance, deploys code there (a Ruby script), runs it, and shuts the instance down once the script completes.
What is the best way to do this?
Thanks.
Here's an approach that can accomplish this without any external computer/cron job:
EC2 AutoScaling supports schedules for running instances. You could use this to start an instance at a particular time each night.
The instance could be of an AMI that has a startup script that does the setup and running of the job. Or, you could specify a user-data script be passed to the instance that does this job for you.
The script could terminate the instance when it has completed running.
If you are running an EBS-boot instance, then shutdown -h now in your script will terminate the instance if you specify an instance-initiated-shutdown-behavior of terminate.
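A rough sketch of the scheduled-scaling piece, assuming an Auto Scaling group named nightly-job-asg (a made-up name) whose launch configuration's user data deploys and runs the Ruby script; the recurrence fields are UTC cron expressions:
# Bring up one instance each night at 02:00 UTC
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name nightly-job-asg \
  --scheduled-action-name nightly-start \
  --recurrence "0 2 * * *" \
  --desired-capacity 1
# Scale back to zero later, or let the instance terminate itself as described above
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name nightly-job-asg \
  --scheduled-action-name nightly-stop \
  --recurrence "0 5 * * *" \
  --desired-capacity 0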

unable to auto start/stop aws ec2 instance

I wanted to automate the EC2 instance's start & stop and configured crontab on an instance X. I followed these steps:
1) Edited the crontab -e of instance X.
2) and added these lines
15 04 * * * username ec2-start-instances i-f1814c90
15 07 * * * username ec2-stop-instances i-f1814c90
10 10 * * * username ec2-start-instances i-f1814c90
3) and restarted the cron using sudo /etc/init.d/cron restart
But I am still unable to either start or stop the EC2 instance using the cron job.
thanks,
Anand
Most likely the issue is that the AWS credentials and environment variables needed to run the ec2 start and stop commands are not present in the cron environment.
It is better to write a separate script that does this, instead of putting the ec2 commands directly in the crontab like that.
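Something along these lines (the region, log path and the use of the newer aws CLI instead of the legacy ec2-start-instances tools are my own assumptions):
#!/bin/bash
# Wrapper for cron: cron jobs do not inherit your login shell's environment,
# so set the region, PATH (and credentials, if not using an instance role) here.
export AWS_DEFAULT_REGION=us-east-1
export PATH=/usr/local/bin:/usr/bin:/bin
aws ec2 start-instances --instance-ids i-f1814c90 >> /var/log/ec2-schedule.log 2>&1
Then point the crontab entries at this script instead of at the ec2 commands directly.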
This is what AWS Data Pipeline is for (it works fine):
https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/
Just mind the trap: --region eu-west-1 NOT --region eu-west-1a (which is an availability zone).
I'd suggest scheduling EC2 start/stop using AWS Lambda.
You don't need anything more than a small script or two that you schedule. There is no instance to launch, just a quick invocation of the script you have built. Pick the programming language of your choice and use the AWS SDK to perform the instance operations. A quite lightweight solution.
