Install one script into a group of servers - shell

I have a shell script which needs to be installed on over 100 Ubuntu instances/servers. What is the best way to install the same script on all instances without logging into each one?

You can use AWS Systems Manager. According to the AWS documentation:
You can send commands to tens, hundreds, or thousands of instances by using the targets parameter (the Select Targets by Specifying a Tag option in the Amazon EC2 console). The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you execute the command, the system locates and attempts to run the command on all instances that match the specified tags.
You can target instances by tag:
aws ssm send-command --document-name name --targets Key=tag:tag_name,Values=tag_value [...]
or target instance IDs:
aws ssm send-command --document-name name --targets Key=instanceids,Values=ID1,ID2,ID3 [...]
Read the AWS documentation for details.
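As a concrete sketch, installing a script on every instance carrying a given tag might look like this (the tag key/value and the script URL are assumptions, not from the original answer):
#!/bin/bash
# Run the install script on every instance tagged Environment=production
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets Key=tag:Environment,Values=production \
  --parameters '{"commands":["curl -fsSL https://example.com/install.sh | bash"]}' \
  --comment "Install script on all tagged instances"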

You have several different options when trying to accomplish this task.
Like Kush mentioned, AWS Systems Manager is great, but it is tightly coupled to AWS.
Packer - You could use Packer to create an AMI of the servers with the script already installed on them, or already executed, whatever the script is doing.
Configuration management.
Ansible/Puppet/Chef - These tools allow you to manage thousands of servers with only a couple of commands. My preference would be Ansible: it is lightweight, the syntax is plain YAML, it connects over SSH, and it still allows you to push shell scripts if need be (see the sketch below).
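For example, Ansible's ad-hoc script module can run a local shell script on every host in an inventory with a single command (a minimal sketch; the inventory file and script name are assumptions):
# Copy install.sh to each host in hosts.ini and execute it there as root
ansible all -i hosts.ini -m script -a "./install.sh" --become
The --become flag escalates to sudo on the remote hosts, which a typical install script needs.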

Related

How to invoke an EXE on EC2 Windows using Lambda/.Net Core

When a file is uploaded to an S3 bucket, I need to invoke an executable on an EC2 instance. The executable will process a long job and invoke some command-line executions.
So, I want to run an EXE on an EC2 Windows instance from AWS Lambda using .NET Core.
After some research, I figured out the prerequisites to do this:
SSM Agent installed on the EC2 instance
An IAM role for EC2 with:
AmazonSSMManagedInstanceCore
An IAM role for Lambda with:
AWSLambdaExecute
AmazonEC2ReadOnlyAccess
AmazonSSMFullAccess
AmazonS3FullAccess
Please advise me if there is any better approach to implementing this.
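For what it's worth, the SSM call the Lambda ultimately issues is equivalent to this CLI invocation (a sketch only; the instance ID and EXE path are placeholders):
# Ask SSM to launch the executable on a specific Windows instance
aws ssm send-command \
  --document-name "AWS-RunPowerShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --parameters '{"commands":["Start-Process C:\\jobs\\process.exe"]}'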

How does one disable SourceDestCheck when creating instances with AWS CLI

It should be possible to disable SourceDestCheck since it is documented
"SourceDestCheck -> (boolean)"
but using run-instances with
aws ec2 run-instances ...
--SourceDestCheck false
or
--sourceDestCheck=false
Fails with
Unknown options: --SourceDestCheck, false
It seems I can set it later with a modify command:
aws ec2 modify-instance-attribute --resource=$INSTANCE_ID --no-source-dest-check
but it should be possible to set it at instantiation. I just can't figure out the actual syntax.
I know this is old, but I ran into the same issue today and solved it this way. In the resource block of your Terraform file, add:
provisioner "local-exec" {
  command = "aws ec2 modify-instance-attribute --no-source-dest-check --instance-id ${self.id}"
}
assuming you have the AWS CLI tools installed.
As far as I can tell, you can't set that on initial launch with the AWS CLI; it's not a supported option. You have to call aws ec2 modify-instance-attribute --no-source-dest-check afterwards, documented here.
As @Mark pointed out, this isn't an option in the RunInstances API. I just want to add that the SourceDestCheck in the AWS CLI doc you referenced is an output. If you look closely, it's an attribute of the ENI.
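So a launch script has to do it in two steps; a minimal sketch (the AMI ID and instance type are placeholders):
#!/bin/bash
# Launch the instance and capture its ID
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-xxxxxxxx --instance-type t3.micro \
  --query 'Instances[0].InstanceId' --output text)
# Wait until the instance is visible to the API, then flip the attribute
aws ec2 wait instance-exists --instance-ids "$INSTANCE_ID"
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --no-source-dest-check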

How to install script in existing HDI cluster on Azure using ARM template

I have created an HDI cluster using an ARM template, and the cluster is now running. Now I want to install my shell script on the existing HDI cluster.
Most of the examples I see install the HDI cluster and run the script action in the same template. Refer to https://github.com/Azure/azure-quickstart-templates/blob/master/hdinsight-linux-run-script-action/azuredeploy.json
How do I install a custom script on an existing HDI cluster using an ARM template?
An ARM template is for cluster creation or the addition of new nodes to the cluster.
You'll want to run your script action using PowerShell, the Azure CLI, or the portal. Here is how you would do it in PowerShell:
# LOG IN TO AZURE
Login-AzureRmAccount
# PROVIDE VALUES FOR THESE VARIABLES
$clusterName = "<HDInsightClusterName>"   # HDInsight cluster name
$saName = "<ScriptActionName>"            # Name of the script action
$saURI = "<URI to the script>"            # The URI where the script is located
$nodeTypes = "headnode", "workernode"
Submit-AzureRmHDInsightScriptAction -ClusterName $clusterName -Name $saName -Uri $saURI -NodeTypes $nodeTypes -PersistOnSuccess
Resources:
https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-customize-cluster-linux
As you mentioned, we can apply script actions during cluster creation, but applying script actions to a running cluster from an Azure Resource Manager template is not currently supported. We can get more details about script actions from the documentation.
As Andrew Moll mentioned, we can use PowerShell to add script actions to a running cluster.
We can also use the REST API (Linux clusters only) to do that easily; a sketch follows below.
If we would like this capability in templates, we can give our feedback to the Azure team.
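For the REST API route, a rough sketch of the call (assuming you already have a bearer token; the subscription, resource group, cluster name, script details, and api-version are placeholders that may differ in your environment):
# Submit a script action to a running HDInsight cluster via ARM REST
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"scriptActions":[{"name":"myScriptAction","uri":"https://example.com/myscript.sh","roles":["headnode","workernode"]}],"persistOnSuccess":true}' \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HDInsight/clusters/<cluster-name>/executeScriptActions?api-version=2018-06-01-preview"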

How to run a shell script on all amazon ec2 instances that are part of a autoscaling group?

Can anyone please tell me how to run a shell script on all the EC2 instances that are part of an Auto Scaling group?
The scenario is that I have a script that I want to run on the many EC2 instances that are launched automatically as part of an Auto Scaling group. The naive approach is to SSH to each instance and run the script. I am looking for a way for it to run automatically on all the instances when I run it on one of them, or any better way of doing this.
Thanks in advance.
You'll want to add that shell script to the user data in a NEW launch configuration and then update the Auto Scaling group.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
Updating the launch configuration:
If you want to change the launch configuration for your Auto Scaling group, you must create a new launch configuration and then update your Auto Scaling group with it. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
https://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
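A minimal user-data sketch along those lines (the script URL is an assumption):
#!/bin/bash
# User data runs as root on first boot of each instance the group launches
curl -fsSL https://example.com/install.sh -o /tmp/install.sh
chmod +x /tmp/install.sh
/tmp/install.sh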
You can implement this in many different ways...
Use the AWS CLI to get all instances in the Auto Scaling group:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names myTestGroup
SSHKit is an interesting tool to run commands on remote servers.
Or invest your time in automating your infrastructure with proper tools like Puppet or Chef. With Puppet's MCollective you can do magic, including what you have asked about.
Update:
When you add an instance to an Auto Scaling group, a new tag (key aws:autoscaling:groupName, value the name of the assigned Auto Scaling group) is added, so it is easy to find the group's instances by searching for this tag:
$ asgName=testASG
$ aws ec2 describe-instances --filters Name=tag-key,Values='aws:autoscaling:groupName' Name=tag-value,Values=$asgName --output text --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress]'
The output you get from the command above is the instance ID and public IP:
i-4c42aabc 52.1.x.y
You can use this in your script...
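For example, feeding those IPs into an SSH loop (a sketch; the key file, user name, and script name are assumptions):
#!/bin/bash
asgName=testASG
for ip in $(aws ec2 describe-instances \
    --filters Name=tag-key,Values='aws:autoscaling:groupName' Name=tag-value,Values=$asgName \
    --query 'Reservations[*].Instances[*].PublicIpAddress' --output text); do
  # Stream the local script to each instance and run it there
  ssh -i mykey.pem ubuntu@"$ip" 'bash -s' < install.sh
done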
I would do it using Chef. OpsWorks (an AWS product) is Chef plus a lot of things that will do exactly what you want and give you even more flexibility.
Run Command perhaps?
You can use it to invoke scripts as well:
*Nice cheat: Select Run Command in the left-side menu of the EC2 section and click the "Run Command" button. Then set up your AWS-RunShellScript and type in your code. At the bottom there is a dropdown labelled "AWS Command Line Interface command"; select the correct platform and copy/paste the command into a script.
$Command_Id = aws ssm send-command --document-name "AWS-RunPowerShellScript" --targets '{\"Key\":\"tag:Name\",\"Values\":[\"RunningTests\"]}' --parameters '{\"commands\":[\"Start-Process \\\"C:\\\\path\\\\to\\\\scripts\\\\LOCAL_RUN.ps1\\\"\"]}' --comment "Run Tests Locally" --timeout-seconds 3800 --region us-east-1 --query 'Command.CommandId' --output text | Out-String
Per the question: use your Auto Scaling group name instead of "RunningTests". Or, in the console, in your Run Command setup, select the "Specifying a Tag" radio button, then "Name" and your Auto Scaling group.
*Note: The command above is Windows PowerShell, but you can convert your script to Linux/macOS or whatever by selecting the correct platform in the Run Command setup.
**Note: Ensure your user on that instance has the AmazonSSMFullAccess permission set up to run the commands.
***Note: The SSM Agent comes installed on Windows instances by default. If you are running Linux or something else, you might need to install the SSM Agent.
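The Linux equivalent of the command above, targeted at the Auto Scaling group tag, might look like this (a sketch; the group name and script path are assumptions):
# Run a shell script on every member of the Auto Scaling group via SSM
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:aws:autoscaling:groupName,Values=myTestGroup" \
  --parameters '{"commands":["/tmp/install.sh"]}' \
  --comment "Run script on all ASG members"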
For a simple implementation you can use Python Fabric (http://www.fabfile.org/). It is a tool to run commands from a local or bastion instance on a list of servers.
Here is a repo which has basic scaffolding and examples. A lot of CI/CD tools have features targeting this requirement, but I found Fabric the easiest to implement for a simple setup.
https://github.com/techsemicolon/python-fabric-ec2-aws

What is the proper way of using the AWS CLI EC2 "wait" function?

I am using AWS OpsWorks to statically configure a simple stack consisting of two layers (a Rails app server and a MySQL DB).
After having successfully configured and started the stack and deployed my app, I would like to automate the start activity for the stack as part of my pipeline. The AWS CLI provides features to start stacks, retrieve the instance IDs of the individual servers, and then poll AWS for the completion status ("instance-running") using the EC2 wait command.
The script below is what I am using (the first command starts the stack, the second retrieves the instance IDs for the two hosts, the third initiates the wait command for these two servers):
#!/bin/bash
aws opsworks --region us-east-1 start-stack --stack-id 9e1b0534-5b38-4fa5-b30c-f849dda8f46b
instance_id=$(aws opsworks --region us-east-1 describe-instances --stack-id 9e1b0534-5b38-4fa5-b30c-f849dda8f46b --query "Instances[].Ec2InstanceId" --output text)
aws ec2 wait --region ap-southeast-1 instance-running --instance-ids $instance_id
When running this script, I always get an "InvalidInstanceID" exception on one of the two IDs even though it is definitely the right ID. Secondly, if I run the last command directly in the shell while starting the stack in parallel via the AWS console, it turns out that the wait command returns BEFORE the servers are actually up and running (which is the whole point of the exercise).
Lastly, I couldn't find any information about timeouts, which seem quite essential for a blocking async operation. Where can the wait timeout be defined?
Any idea whether there is a glitch in my code, or some specific consideration that I need to take into account?
The aws opsworks describe-instances command uses --region us-east-1, but the aws ec2 wait command uses --region ap-southeast-1. Are you sure the instances you're waiting for are in ap-southeast-1 rather than us-east-1?
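For reference, a corrected sketch with one consistent region (assuming the stack really is in us-east-1):
#!/bin/bash
REGION=us-east-1
STACK_ID=9e1b0534-5b38-4fa5-b30c-f849dda8f46b
# Start the stack, look up its EC2 instance IDs, then wait in the same region
aws opsworks --region $REGION start-stack --stack-id $STACK_ID
instance_ids=$(aws opsworks --region $REGION describe-instances \
  --stack-id $STACK_ID --query "Instances[].Ec2InstanceId" --output text)
aws ec2 wait instance-running --region $REGION --instance-ids $instance_ids
Note that instance-running only waits for the EC2 state to become running; if you need the OS-level status checks to pass before proceeding, aws ec2 wait instance-status-ok is the stricter waiter.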
