knife vsphere requests root password - is unattended execution possible? - bash

Is there any way to run knife vsphere for unattended execution? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOST_NAME.my.lan
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC \
--datastore $DATASTORE \
--cvlan $NETWORK \
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME \
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for the password, then sometimes it times out, the connection is lost, and Chef doesn't bootstrap. I would also like to automate all of this to scale elastically based on system needs, which won't work with attended execution.

The idea I am going to run with, unless a better solution is offered, is to have a default password in the template, pass it on the command line to knife, and have Chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife...
Update: I wanted to add that this is working like a charm. Ideally we would have changed the CentOS template we were deploying, but that wasn't possible here, so this is a fine alternative (we changed the root password after deploy anyhow).
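A minimal sketch of that approach, assuming your knife-vsphere version supports the --ssh-user/--ssh-password bootstrap options (check knife vsphere vm clone --help for your plugin version); TEMPLATE_ROOT_PW is a hypothetical environment variable holding the template's default root password, so it never has to be hard-coded in the script itself:

```shell
#!/bin/bash
# Hypothetical: the template's default root password comes from the
# environment rather than the script; Chef rotates it after bootstrap.
TEMPLATE_ROOT_PW="${TEMPLATE_ROOT_PW:-changeme}"
HOST_NAME="${HOST_NAME:-dcbsmtest}"

# --ssh-password is an assumed knife-vsphere flag; verify with --help.
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
 --template \"CentOS\" \
 --bootstrap true \
 --ssh-user root \
 --ssh-password $TEMPLATE_ROOT_PW"

echo "$VM_CLONE_CMD"
```

With the password supplied this way, the clone-and-bootstrap run never stops at a prompt, so the script can be driven by a scheduler or autoscaler.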

Related

how to load array parameter in another shell file dynamically over ssh connection

I need to call an executable on an on-prem server over an SSH connection and pass it a dynamic parameter list.
Based on my requirement, users should be able to add or remove parameters as they want when working with the executable on the on-prem server.
I wrote a translator to identify any new parameter added to the console, but now that I want to pass them via SSH, I am facing 2 problems:
What if I have a value that contains a space?
How do I load these values dynamically and use them as arguments in my shell script on the server?
Also note that I am sending some additional parameters that are not related to my executable's arguments, but I need them as well.
params=(
  "$MASTER"
  "$NAME"
  "$QUEUE"
  service.enabled=true
)

for var_name in "${!conf__@}"; do
  key=${var_name#conf__}
  key=${key//_/.}
  value=${!var_name}
  params+=( --conf "$key=$value" )
done
echo "${params[@]}"
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s" < deploy_script.sh "${params[@]}"
My deploy_script.sh file will be something like the below file.
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# How can I get the other dynamic parameters???
main() {
  my-executable \
    --master "$AR_MASTER" \
    --name "$AR_NAME" \
    --queue "$AR_QUEUE" \
    --conf service.enabled="$AR_SER_EN" \
    ??? # how to add the additional configuration dynamically?
}
main "$@"
Would you mind helping me figure it out?
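One way to tackle both problems (a sketch, not the asker's exact setup): quote each argument with printf %q before embedding it in the remote command line, and on the remote side consume the fixed arguments, then shift so everything left over is the dynamic tail passed straight through via "$@". The eval below simulates the re-parsing the remote shell performs on the ssh command string:

```shell
#!/bin/bash
# Illustrative values; "name with space" shows why quoting matters.
params=( "yarn" "name with space" "default-queue" service.enabled=true )
params+=( --conf "spark.executor.memory=4g" )   # dynamically added extra

# %q escapes each word so it survives being re-parsed by the remote shell.
printf -v quoted '%q ' "${params[@]}"

# The real invocation would embed the quoted string in the command:
#   ssh user@host "/bin/bash -s -- $quoted" < deploy_script.sh
# and deploy_script.sh would read its fixed args, then `shift 4` and
# hand the remaining "$@" to the executable.
# Simulate the remote shell's re-parse of the command string:
eval "set -- $quoted"
echo "argc=$#"
echo "second=$2"
```

Because each word is individually escaped, values with spaces arrive as single arguments, and the dynamic tail needs no per-parameter plumbing on the server side.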

.ssh/config: line 1: Bad configuration option: \342\200\234host

I am trying to deploy code from GitLab to the EC2 instance. However, I am getting the following errors when I run the pipeline
/home/gitlab-runner/.ssh/config: line 1: Bad configuration option: \342\200\234host
/home/gitlab-runner/.ssh/config: terminating, 1 bad configuration options
Here is my .gitlab-ci.yml file that I am using.
stages:
  - QAenv
  - Prod

Deploy to Staging:
  stage: QAenv
  tags:
    - QA
  before_script:
    # Generates the SSH key to connect to the AWS instance.
    - mkdir -p ~/.ssh
    - echo -e “$SSH_PRIVATE_KEY” > ~/.ssh/id_rsa
    # Sets the permission to 600 to prevent a problem with AWS
    # that it’s too unprotected.
    - chmod 600 ~/.ssh/id_rsa
    - 'echo -e “Host *\n\tStrictHostKeyChecking no\n\n” > ~/.ssh/config'
  script:
    - bash ./gitlab-deploy/.gitlab-deploy.staging.sh
  environment:
    name: QAenv
    # Exposes a button that when clicked takes you to the defined URL:
    url: https://your.url.com
Below is my .gitlab-deploy.staging.sh file that I have set up to deploy to my server.
# !/bin/bash
# Get servers list:
set — f
# Variables from GitLab server:
# Note: They can’t have spaces!!
string=$DEPLOY_SERVER
array=(${string//,/ })
for i in "${!array[@]}"; do
echo "Deploy project on server ${array[i]}"
ssh ubuntu@${array[i]} "cd /opt/bau && git pull origin master"
done
I checked my .ssh/config file contents and below is what I can see.
ubuntu@:/home/gitlab-runner/.ssh$ cat config
“Host *ntStrictHostKeyChecking nonn”
Any ideas about what I am doing wrong and what changes I should make?
The problem is with:
ubuntu@ip-172-31-42-114:/home/gitlab-runner/.ssh$ cat config
“Host *ntStrictHostKeyChecking nonn”
There are some Unicode characters here, which usually happens when code is copy-pasted from a document or a web page.
In your case you can see this “ character specifically in the output as well.
Replace it with a straight " and check your config for others; after updating, it should work.
There are more details in this question: getting errors stray ‘\342’ and ‘\200’ and ‘\214’
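To find every offending character (the \342\200\234 bytes in the error are the UTF-8 encoding of “), you can scan the file for anything outside printable ASCII; the temporary file below just stands in for ~/.ssh/config so the sketch is self-contained:

```shell
#!/bin/sh
# Create a stand-in config containing pasted curly quotes
# (\342\200\234 and \342\200\235 are “ and ” in UTF-8).
config=$(mktemp)
printf '\342\200\234Host *\342\200\235\n' > "$config"

# LC_ALL=C makes the bracket expression a plain byte range, so any
# byte outside space..tilde (printable ASCII) is flagged with its
# line number -- exactly the bytes sshd complains about.
found=no
if LC_ALL=C grep -n '[^ -~]' "$config"; then
  found=yes
  echo "non-ASCII characters found: retype them as straight quotes"
fi
rm -f "$config"
```

Running the same grep against the real ~/.ssh/config (or the .gitlab-ci.yml that generates it) pinpoints every smart quote left over from the copy-paste.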

How do I prompt for an MFA key to generate and use credentials for AWS CLI access?

I have several Bash scripts that invoke AWS CLI commands for which permissions have changed to require MFA, and I want to be able to prompt for a code generated by my MFA device in these scripts so that they can run with the necessary authentication.
But there seems to be no simple built in way to do this. The only documentation I can find involves a complicated process of using aws sts get-session-token and then saving each value in a configuration, which it is then unclear how to use.
To be clear, what I'd like is that when I run one of my scripts containing AWS CLI commands that require MFA, I'm simply prompted for the code, and providing it allows the AWS CLI operations to complete. Something like:
#!/usr/bin/env bash
# (1) prompt for generated MFA code
# ???
# (2) use entered code to generate necessary credentials
aws sts get-session-token ... --token-code $ENTERED_VALUE
# (3) perform my AWS CLI commands requiring MFA
# ....
It's not clear to me how to prompt for this when needed (which is probably down to not being proficient with bash) or how to use the output of get-session-token once I have it.
Is there a way to do what I'm looking for?
I've tried to trigger a prompt by specifying a --profile with an mfa_serial entry, but that doesn't work either.
OK, after spending more time on this script with a colleague, we have come up with a much simpler script. It does all the credentials-file work for you and is much easier to read. It also allows new tokens for all your environments to live in the same credentials file. The initial call to get your MFA requires your default account keys in the credentials file; it then generates your MFA token and puts it back in the credentials file.
#!/usr/bin/env bash
function usage {
  echo "Example: ${0} dev 123456"
  exit 2
}

if [ $# -lt 2 ]; then
  usage
fi

MFA_SERIAL_NUMBER=$(aws iam list-mfa-devices --profile bh${1} --query 'MFADevices[].SerialNumber' --output text)

function set-keys {
  aws configure set aws_access_key_id ${2} --profile=${1}
  aws configure set aws_secret_access_key ${3} --profile=${1}
  aws configure set aws_session_token ${4} --profile=${1}
}

case ${1} in
  dev|qa|prod) set-keys ${1} $(aws sts get-session-token --profile bh${1} --serial-number ${MFA_SERIAL_NUMBER} --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text --token-code ${2});;
  *) usage ;;
esac
Inspired by @strongjz's and @Nick's answers, I wrote a small Python command to which you can pipe the output of the aws sts command.
To install:
pip install sts2credentials
To use:
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
--token-code 123456 \
--profile=your-profile-name \
| sts2credentials
This will automatically add the access key ID, the secret access key, and the session token under a new "sts" profile in your ~/.aws/credentials file.
For bash, you could read in the value, then set those values from the sts output:
echo "Type the MFA code that you want to use (6 digits), followed by [ENTER]:"
read ENTERED_VALUE
aws sts get-session-token ... --token-code "$ENTERED_VALUE"
then you'll have to parse the output of the sts call which has the access key, secret and session token.
{
Credentials: {
AccessKeyId: "ASIAJPC6D7SKHGHY47IA",
Expiration: 2016-06-05 22:12:07 +0000 UTC,
SecretAccessKey: "qID1YUDHaMPet5xw/vpw1Wk8SKPilFihdiMSdSIj",
SessionToken: "FQoDYXdzEB4aDLwmzouEQ3eckfqJxyLOARbBGasdCaAXkZ7ABOcOCNx2/7sS8N7A6Dpcax/t2G8KNTcUkRLdxI0gTvPoKQeZrH8wUrL4UxFFP6kCWEasdVIBAoUfuhdeUa1a7H216Mrfbbv3rMGsVKUoJT2Ar3r0pYgsYxizOWzH5VaA4rmd5gaQvfSFmasdots3WYrZZRjN5nofXJBOdcRd6J94k8m5hY6ClfGzUJEqKcMZTrYkCyUu3xza2S73CuykGM2sePVNH9mGZCWpTBcjO8MrawXjXj19UHvdJ6dzdl1FRuKdKKeS18kF"
}
}
then set them
aws configure set aws_access_key_id default_access_key --profile NAME_PROFILE
aws configure set aws_secret_access_key default_secret_key --profile NAME_PROFILE
aws configure set region us-west-2 --profile NAME_PROFILE
aws some_commmand --profile NAME_PROFILE
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_08_02.html
AWS STS API Reference
http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html
AWS CLI STS Command
http://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html
I wrote something very similar to what you are trying to do in Go, here, but it is for sts assume-role rather than get-session-token.
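The hand-parsing step can be skipped entirely by letting the CLI extract the three values with --query/--output text and reading them straight into variables. In the sketch below, the aws function is a mock standing in for the real CLI (so the pattern is runnable here), and the serial-number ARN is a placeholder:

```shell
#!/bin/sh
# MOCK: stands in for the real AWS CLI. The real
#   aws sts get-session-token ... --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text
# prints the three values tab-separated on one line, just like this.
aws() {
  printf 'ASIAEXAMPLEKEY\tEXAMPLEsecret\tEXAMPLEtoken\n'
}

creds=$(aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
  --token-code "${ENTERED_VALUE:-123456}" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

# Word-split the tab-separated line (the values contain no spaces).
set -- $creds
AWS_ACCESS_KEY_ID=$1
AWS_SECRET_ACCESS_KEY=$2
AWS_SESSION_TOKEN=$3
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

echo "key=$AWS_ACCESS_KEY_ID"
```

With the three variables exported, every subsequent aws command in the same shell uses the MFA session automatically, with no credentials-file edits needed.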
I wrote a simple script to set up the AWS credentials file for a profile called mfa. All the bash scripts you write then just need "--profile mfa" added and they will just work. This also allows for multiple AWS accounts, as many of us have these days. I'm sure this can be improved, but it was quick and dirty and does what you want and everything I need.
You will have to amend the details in the script to fit your account; I have marked them clearly with chevrons < >. NB: obviously, once you have populated the script with all your details it is not to be copied about, unless you want unintended consequences. The credentials file feeds back into itself: the standard access keys are read each time to create the MFA security tokens.
#!/bin/bash
# Change for your username - would be /home/username on Linux/BSD
dir='/Users/<your-user-name>'
region=us-east-1
function usage {
  echo "Must enter mfa token and then either dev/qa/prod"
  echo "i.e. mfa-set-aws-profile.sh 123456 qa"
  exit 2
}

if [[ $1 == "" ]]; then
  echo "Must give me a token - how do you expect this to work - DOH :-)"
  usage
fi

# Write the output from the sts command to a json file for parsing.
# Just add accounts below as required.
case $2 in
  dev) aws sts get-session-token --profile dev --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
  qa)  aws sts get-session-token --profile qa --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
  -h)  usage ;;
  *)   usage ;;
esac
# Remove quotes and commas to make the file easier to parse.
# N.B. gsed is GNU sed for OSX - on Linux/BSD etc. plain sed should be just fine.
/usr/local/bin/gsed -i 's/\"//g;s/\,//g' $dir/mfa-json
# Parse the mfa info into vars for use in the aws credentials file
# (-E is needed so the + quantifier works).
seckey=$(grep SecretAccessKey $dir/mfa-json | gsed -E 's/[[:space:]]+SecretAccessKey: //g')
acckey=$(grep AccessKeyId $dir/mfa-json | gsed -E 's/[[:space:]]+AccessKeyId: //g')
sesstok=$(grep SessionToken $dir/mfa-json | gsed -E 's/[[:space:]]+SessionToken: //g')
# output all the gathered info into your aws credentials file.
cat << EOF > $dir/.aws/credentials
[default]
aws_access_key_id = <your normal keys here if required>
aws_secret_access_key = <your normal keys here if required>
[dev]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[qa]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[mfa]
output = json
region = $region
aws_access_key_id = $acckey
aws_secret_access_key = $seckey
aws_session_token = $sesstok
EOF

What gems do you recommend to use for this kind of automation?

I have to create a script to manage a maintenance-page server for my hosting company.
I will need to build a CLI interface that would act like this (example scenario):
(here, let's suppose that mcli is the name of the script and 1.1.1.1 the original server address that hosts the website, www.example.com)
Here I just create the loopback interface on the maintenance server with the original ip address and create the nginx site-specific config file in sites-enabled
$ mcli register www.example.com 1.1.1.1
[DEBUG] Adding IP 1.1.1.1 to new loopback interface lo:001001001001
[WARNING] No root directory specified, setting default maintenance page.
[DEBUG] Registering www.example.com maintenance page and reloading Nginx: OK
Then when I want to enable the maintenance page and completely shutdown the website:
$ mcli maintenance www.example.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Setting new route to 1.1.1.1 to maintenance server: OK
[DEBUG] Writing configuration: Ok
Then removing the maintenance page:
$ mcli nomaintenance www.example.com
[DEBUG] Connecting to router with SSH: OK
[DEBUG] Removing route to 1.1.1.1: Ok
[DEBUG] Writing configuration: Ok
And I would need a function to see the current state of the websites:
$ mcli list
+------------------+-----------------+------------------+
| Site Name | Server I.P | Maintenance mode |
+------------------+-----------------+------------------+
| www.example.com | 1.1.1.1 | Enabled |
| www.example.org | 1.1.1.2 | Disabled |
+------------------+-----------------+------------------+
$ mcli show www.example.org
Site Name: www.example.org
Server I.P: 1.1.1.2
Maintenance Mode: Disabled
Root Directory: /var/www/maintenance/default/
But I have never done this kind of scripting with Ruby. What gems do you recommend for this kind of thing? For command-line parsing? Column/colorized output? SSH connections (needed to connect to Cisco routers)?
Do you recommend using a local database (SQLite) to store metadata (state changes, current states), or computing it on the fly by analyzing nginx/interface configuration files and using syslog to monitor changes made with this script?
This script will be used at first for a massive datacenter physical migration, and later for standard usage during scheduled downtimes.
Thank you
First of all, I'd recommend you get a copy of Build Awesome Command-Line Applications in Ruby.
That said, you might want to check
GLI command line parsing like git
OptionParser command line parsing
Personally, I'd go for the SQLite approach for storing data, but I'm biased (having a strong SQL background).
Thor is a good gem for handling CLI options. It allows this type of organization in your script:
class Maintenance < Thor
  desc "maintenance", "put up maintenance page"
  method_option :switch, :aliases => '-s', :type => :string

  # The method name is the name of the task that would be run => mcli maintenance
  def maintenance
    # do stuff
  end

  no_tasks do
    # methods that you don't want cli tasks for go here
  end
end

Maintenance.start
I don't really have any good suggestions for column/colorized output though.
I definitely recommend using some kind of database to store state. Maybe not sqlite; I would probably opt for a redis database that stores key/value pairs with the information you are looking for.
We had a similar task. I used the following architecture:
A small application (in C) that generates the config file.
A new update_clusters switch added to the nginx init.d script. It restarts nginx only if the config file has changed:
update_clusters() {
  ${CONF_GEN} --outfile=/tmp/nginx_clusters.conf
  RETVAL=$?
  if [[ "$RETVAL" != "0" ]]; then
    return 5
  fi
  if ! diff ${CLUSTER_CONF_FILE} /tmp/nginx_clusters.conf > /dev/null; then
    echo "Cluster configuration changed. Reload service"
    mv -f /tmp/nginx_clusters.conf ${CLUSTER_CONF_FILE}
    reload
  fi
}
A set of bash scripts to add records to the database.
A web console to add/modify/delete records in the database (extjs + nginx module).

How do I make cloud-init startup scripts run every time my EC2 instance boots?

I have an EC2 instance running an AMI based on the Amazon Linux AMI. Like all such AMIs, it supports the cloud-init system for running startup scripts based on the User Data passed into every instance. In this particular case, my User Data input happens to be an Include file that sources several other startup scripts:
#include
http://s3.amazonaws.com/path/to/script/1
http://s3.amazonaws.com/path/to/script/2
The first time I boot my instance, the cloud-init startup script runs correctly. However, if I do a soft reboot of the instance (by running sudo shutdown -r now, for instance), the instance comes back up without running the startup script the second time around. If I go into the system logs, I can see:
Running cloud-init user-scripts
user-scripts already ran once-per-instance
[ OK ]
This is not what I want -- I can see the utility of having startup scripts that only run once per instance lifetime, but in my case these should run every time the instance starts up, like normal startup scripts.
I realize that one possible solution is to manually have my scripts insert themselves into rc.local after running the first time. This seems burdensome, however, since the cloud-init and rc.d environments are subtly different and I would now have to debug scripts on first launch and all subsequent launches separately.
Does anyone know how I can tell cloud-init to always run my scripts? This certainly sounds like something the designers of cloud-init would have considered.
In 11.10, 12.04 and later, you can achieve this by making the 'scripts-user' module run 'always'.
In /etc/cloud/cloud.cfg you'll see something like:
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- keys-to-console
- phone-home
- final-message
This can be modified after boot, or cloud-config data overriding this stanza can be inserted via user-data. Ie, in user-data you can provide:
#cloud-config
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- [scripts-user, always]
- keys-to-console
- phone-home
- final-message
That can also be '#included' as you've done in your description.
Unfortunately, right now, you cannot modify the 'cloud_final_modules', but only override it. I hope to add the ability to modify config sections at some point.
There is a bit more information on this in the cloud-config doc at
https://github.com/canonical/cloud-init/tree/master/doc/examples
Alternatively, you can put files in /var/lib/cloud/scripts/per-boot , and they'll be run by the 'scripts-per-boot' path.
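For the per-boot route, the script just needs to be executable and live in that directory; cloud-init's scripts-per-boot module then runs it on every boot. A trivial sketch (the log path is only an illustration, chosen to be world-writable):

```shell
#!/bin/sh
# Save as /var/lib/cloud/scripts/per-boot/00-every-boot.sh and make it
# executable (chmod +x); the scripts-per-boot module runs everything in
# that directory on each boot, with no user-data changes required.
LOG="${TMPDIR:-/tmp}/every-boot.log"   # illustrative target
echo "booted at $(date -u)" >> "$LOG"
```

Because the hook lives on the instance's disk rather than in user data, it survives reboots and stop/start cycles without touching the cloud_final_modules configuration.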
In /etc/init.d/cloud-init-user-scripts, edit this line:
/usr/bin/cloud-init-run-module once-per-instance user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
to
/usr/bin/cloud-init-run-module always user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
Good luck !
cloud-init now supports this natively; see the runcmd vs. bootcmd command descriptions in the documentation (http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot):
"runcmd":
#cloud-config
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note, that the list has to be proper yaml, so you have to quote
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world'=========" ]
- ls -l /root
- [ wget, "http://slashdot.org", -O, /tmp/index.html ]
"bootcmd":
#cloud-config
# boot commands
# default: none
# this is very similar to runcmd, but commands run very early
# in the boot process, only slightly after a 'boothook' would run.
# bootcmd should really only be used for things that could not be
# done later in the boot process. bootcmd is very much like
# boothook, but possibly more friendly.
# - bootcmd will run on every boot
# - the INSTANCE_ID variable will be set to the current instance id.
# - you can use 'cloud-init-per' command to help only run once
bootcmd:
- echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
- [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
Also note the "cloud-init-per" command example in bootcmd. From its help:
Usage: cloud-init-per frequency name cmd [ arg1 [ arg2 [ ... ] ]
run cmd with arguments provided.
This utility can make it easier to use boothooks or bootcmd
on a per "once" or "always" basis.
If frequency is:
* once: run only once (do not re-run for new instance-id)
* instance: run only the first boot for a given instance-id
* always: run every boot
One possibility, although somewhat hackish, is to delete the lock file that cloud-init uses to determine whether or not the user-script has already run. In my case (Amazon Linux AMI), this lock file is located in /var/lib/cloud/sem/ and is named user-scripts.i-7f3f1d11 (the suffix is the instance ID). Therefore, the following user-data script added to the end of the Include file will do the trick:
#!/bin/sh
rm /var/lib/cloud/sem/user-scripts.*
I'm not sure if this will have any adverse effects on anything else, but it has worked in my experiments.
Use the multipart user data below, placing your own bash script after the cloud-config part. In this example it appends "Hello World." to a file. Note that you must stop the instance before updating its user data.
Script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "Hello World." >> /var/tmp/sdksdfjsdlf
--//
I struggled with this issue for almost two days, tried all of the solutions I could find and finally, combining several approaches, came up with the following:
MyResource:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        setup_process:
          - "prepare"
          - "run_for_instance"
      prepare:
        commands:
          01_apt_update:
            command: "apt-get update"
          02_clone_project:
            command: "mkdir -p /replication && rm -rf /replication/* && git clone https://github.com/awslabs/dynamodb-cross-region-library.git /replication/dynamodb-cross-region-library/"
          03_build_project:
            command: "mvn install -DskipTests=true"
            cwd: "/replication/dynamodb-cross-region-library"
          04_prepare_for_apac:
            command: "mkdir -p /replication/replication-west && rm -rf /replication/replication-west/* && cp /replication/dynamodb-cross-region-library/target/dynamodb-cross-region-replication-1.2.1.jar /replication/replication-west/replication-runner.jar"
      run_for_instance:
        commands:
          01_run:
            command: !Sub "java -jar replication-runner.jar --sourceRegion us-east-1 --sourceTable ${TableName} --destinationRegion ap-southeast-1 --destinationTable ${TableName} --taskName -us-ap >/dev/null 2>&1 &"
            cwd: "/replication/replication-west"
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #cloud-config
        cloud_final_modules:
        - [scripts-user, always]
        runcmd:
        - /usr/local/bin/cfn-init -v -c setup_process --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
        - /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
This is the setup for DynamoDb cross-region replication process.
If someone wants to do this on CDK, here's a python example.
For Windows, user data has a special persist tag, but for Linux you need to use multipart user data to set up cloud-init first. This Linux example worked with the cloud-config part type (see the referenced blog) instead of cloud-boothook, which requires a cloud-init-per call (see also bootcmd) that I couldn't test (e.g. cloud-init-per always).
Linux example:
# Create some userdata commands
instance_userdata = ec2.UserData.for_linux()
instance_userdata.add_commands("apt update")
# ...

# Now create the first part to make cloud-init run it always
cinit_conf = ec2.UserData.for_linux()
cinit_conf.add_commands('#cloud-config')
cinit_conf.add_commands('cloud_final_modules:')
cinit_conf.add_commands('- [scripts-user, always]')

multipart_ud = ec2.MultipartUserData()
#### Setup to run every time instance starts
multipart_ud.add_part(ec2.MultipartBody.from_user_data(cinit_conf, content_type='text/cloud-config'))
#### Add the commands desired to run every time
multipart_ud.add_part(ec2.MultipartBody.from_user_data(instance_userdata))

ec2.Instance(
    self, "myec2",
    userdata=multipart_ud,
    # other required config...
)
Windows example:
instance_userdata = ec2.UserData.for_windows()
# Bootstrap
instance_userdata.add_commands("Write-Output 'Run some commands'")
# ...

# Making all the commands persistent - ie: running on each instance start
data_script = instance_userdata.render()
data_script += "<persist>true</persist>"
ud = ec2.UserData.custom(data_script)

ec2.Instance(
    self, "myWinec2",
    userdata=ud,
    # other required config...
)
Another approach is to use #cloud-boothook in your user data script. From the docs:
Cloud Boothook
Begins with #cloud-boothook or Content-Type: text/cloud-boothook.
This content is boothook data. It is stored in a file under /var/lib/cloud and then executed immediately.
This is the earliest "hook" available. There is no mechanism provided for running it only one time. The boothook must take care
of this itself. It is provided with the instance ID in the environment
variable INSTANCE_ID. Use this variable to provide a once-per-instance
set of boothook data.
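A sketch of that once-per-instance guard, assuming a writable marker directory (a real boothook would more likely keep its marker under /var/lib/cloud); INSTANCE_ID is set by cloud-init, and is defaulted here only so the snippet runs standalone:

```shell
#cloud-boothook
#!/bin/sh
# This part runs on EVERY boot; the marker file makes the payload
# effectively once-per-instance, as the docs suggest.
INSTANCE_ID="${INSTANCE_ID:-i-0example}"        # provided by cloud-init
marker="${TMPDIR:-/tmp}/boothook.${INSTANCE_ID}.done"

if [ ! -e "$marker" ]; then
  echo "first boot for ${INSTANCE_ID}"          # one-time payload goes here
  touch "$marker"
fi
```

On subsequent boots of the same instance the marker exists and the payload is skipped, while a new instance (new INSTANCE_ID, fresh disk) runs it again.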
