Why does the Jetty boot script remove [SK][0-9] from the base name? - shell

I read Jetty 9 boot script and found this:
#!/usr/bin/env bash
#
# Startup script for jetty under *nix systems (it works under NT/cygwin too).
##################################################
# Set the name which is used by other variables.
# Defaults to the file name without extension.
##################################################
NAME=$(echo $(basename $0) | sed -e 's/^[SK][0-9]*//' -e 's/\.sh$//')
# To get the service to restart correctly on reboot, uncomment below (3 lines):
# ========================
# chkconfig: 3 99 99
# description: Jetty 9 webserver
# processname: jetty
# ========================
I wonder why the S / K prefix and the numbers are removed from NAME, e.g.:
/search/S2jetty.sh => jetty
Can anybody explain it? Many thanks!

That's trying to get the service name from the init script that is currently running.
The symlinks in the run-level directories (/etc/rc?.d), which point back to the scripts in /etc/init.d, all start with an S (for start) or a K (for kill) followed by a number (to control sort order).
That letter-and-number combination is an artifact of the init system and not part of the name of the service in question, so the prefix is stripped off.
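As a quick illustration of what that sed pipeline does (a minimal sketch; the example paths are made up):
#!/usr/bin/env bash
# Hypothetical invocations of the same name-extraction logic used by jetty.sh.
for script in /etc/rc3.d/S99jetty.sh /etc/rc0.d/K01jetty.sh /opt/jetty/bin/jetty.sh; do
    name=$(echo $(basename "$script") | sed -e 's/^[SK][0-9]*//' -e 's/\.sh$//')
    echo "$script => $name"    # every variant resolves to plain "jetty"
done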

Related

net-snmp on start says "Error opening specified endpoint" on Raspberry Pi - Jessica

My Raspberry Pi - Jessica - has got snmpd --version:
NET-SNMP version: 5.7.2.1
Web: http://www.net-snmp.org/
Email: net-snmp-coders@lists.sourceforge.net
Now I am trying to make a subagent application and run it.
When I try to run it I get the following error:
$ sudo snmpd -f -Lo -C --rwcommunity=public --master=agentx --agentXSocket=tcp:localhost:1705
pcilib: Cannot open /proc/bus/pci
pcilib: Cannot find any working access method.
pcilib: pci_init failed
error on subcontainer 'ia_addr' insert (-1)
Turning on AgentX master support.
Error opening specified endpoint ""
Server Exiting with code 1
Why do I get this error? Here is my snmpd.conf file.
###############################################################################
#
# EXAMPLE.conf:
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See the 'snmpd.conf(5)' man page for details
#
# Some entries are deliberately commented out, and will need to be explicitly activated
#
###############################################################################
#
# AGENT BEHAVIOUR
#
# Listen for connections from the local system only
agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
#agentAddress udp:161,udp6:[::1]:161
###############################################################################
#
# SNMPv3 AUTHENTICATION
#
# Note that these particular settings don't actually belong here.
# They should be copied to the file /var/lib/snmp/snmpd.conf
# and the passwords changed, before being uncommented in that file *only*.
# Then restart the agent
# createUser authOnlyUser MD5 "remember to change this password"
# createUser authPrivUser SHA "remember to change this one too" DES
# createUser internalUser MD5 "this is only ever used internally, but still change the password"
# If you also change the usernames (which might be sensible),
# then remember to update the other occurances in this example config file to match.
###############################################################################
#
# ACCESS CONTROL
#
# system + hrSystem groups only
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
# Full access from the local host
#rocommunity public localhost
# Default access to basic system info
rocommunity public default -V systemonly
# Full access from an example network
# Adjust this network address to match your local
# settings, change the community string,
# and check the 'agentAddress' setting above
#rocommunity secret 10.0.0.0/16
# Full read-only access for SNMPv3
rouser authOnlyUser
# Full write access for encrypted requests
# Remember to activate the 'createUser' lines above
#rwuser authPrivUser priv
# It's no longer typically necessary to use the full 'com2sec/group/access' configuration
# r[ow]user and r[ow]community, together with suitable views, should cover most requirements
###############################################################################
#
# SYSTEM INFORMATION
#
# Note that setting these values here, results in the corresponding MIB objects being 'read-only'
# See snmpd.conf(5) for more details
sysLocation Sitting on the Dock of the Bay
sysContact Me <me@example.org>
# Application + End-to-End layers
sysServices 72
#
# Process Monitoring
#
# At least one 'mountd' process
proc mountd
# No more than 4 'ntalkd' processes - 0 is OK
proc ntalkd 4
# At least one 'sendmail' process, but no more than 10
proc sendmail 10 1
# Walk the UCD-SNMP-MIB::prTable to see the resulting output
# Note that this table will be empty if there are no "proc" entries in the snmpd.conf file
#
# Disk Monitoring
#
# 10MBs required on root disk, 5% free on /var, 10% free on all other disks
disk / 10000
disk /var 5%
includeAllDisks 10%
# Walk the UCD-SNMP-MIB::dskTable to see the resulting output
# Note that this table will be empty if there are no "disk" entries in the snmpd.conf file
#
# System Load
#
# Unacceptable 1-, 5-, and 15-minute load averages
load 12 10 5
# Walk the UCD-SNMP-MIB::laTable to see the resulting output
# Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file
###############################################################################
#
# ACTIVE MONITORING
#
# send SNMPv1 traps
trapsink localhost public
# send SNMPv2c traps
#trap2sink localhost public
# send SNMPv2c INFORMs
#informsink localhost public
# Note that you typically only want *one* of these three lines
# Uncommenting two (or all three) will result in multiple copies of each notification.
#
# Event MIB - automatically generate alerts
#
# Remember to activate the 'createUser' lines above
iquerySecName internalUser
rouser internalUser
# generate traps on UCD error conditions
defaultMonitors yes
# generate traps on linkUp/Down
linkUpDownNotifications yes
###############################################################################
#
# EXTENDING THE AGENT
#
#
# Arbitrary extension commands
#
extend test1 /bin/echo Hello, world!
extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
#extend-sh test3 /bin/sh /tmp/shtest
# Note that this last entry requires the script '/tmp/shtest' to be created first,
# containing the same three shell commands, before the line is uncommented
# Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table
# and nsExtendOutput2Table) to see the resulting output
# Note that the "extend" directive supercedes the previous "exec" and "sh" directives
# However, walking the UCD-SNMP-MIB::extTable should still returns the same output,
# as well as the fuller results in the above tables.
#
# "Pass-through" MIB extension command
#
#pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest
#pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl
# Note that this requires one of the two 'passtest' scripts to be installed first,
# before the appropriate line is uncommented.
# These scripts can be found in the 'local' directory of the source distribution,
# and are not installed automatically.
# Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output
#
# AgentX Sub-agents
#
# Run as an AgentX master agent
master agentx
# Listen for network connections (from localhost)
# rather than the default named socket /var/agentx/master
#agentXSocket tcp:localhost:705
It's probably too late for Bali Vinayak, but it might help others with a similar issue.
On Ubuntu 16.04.5 LTS with NET-SNMP version 5.7.3, I solved the same error by setting snmpd to use TCP instead of UDP.
So edit /etc/snmp/snmpd.conf:
agentAddress tcp:127.0.0.1:161
Then I got no error running the startup command:
systemctl start snmpd.service
There is a bug in the /etc/snmp/snmpd.conf file.
Search for
trapsink localhost public
and add :162 after localhost, such as
trapsink localhost:162 public
Reference: https://github.com/net-snmp/net-snmp/issues/34
I resolved it by editing /etc/snmp/snmpd.conf to this:
# agentaddress 127.0.0.1,[::1]
agentaddress 127.0.0.1
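This likely works because the commented-out default also listens on the IPv6 loopback ([::1]), which fails when IPv6 is disabled on the host. A quick sanity check before keeping [::1] in agentaddress (a sketch using the standard Linux sysctl path):
# Sketch: check whether IPv6 is actually usable before listening on [::1].
if [ "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null)" = "0" ]; then
    echo "IPv6 is enabled - agentaddress 127.0.0.1,[::1] should be fine"
else
    echo "IPv6 is disabled or unavailable - drop [::1] from agentaddress"
fi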
There is also another, simpler case that results in the "Error opening specified endpoint" message. Because other posts seem to be referring to this at least indirectly, I will post it under this topic as well. Here is my case:
root@am335x-evm:~# /etc/init.d/snmpd stop
Stopping network management services: snmpd snmptrapd.
Then trying to start snmpd manually:
root@am335x-evm:~# /usr/sbin/snmpd -Lo -a -f
Error opening specified endpoint "127.0.0.1"
Server Exiting with code 1
But look:
root@am335x-evm:~# ps aux | grep snmp
root 562 0.4 3.7 30944 9168 ? Ssl 18:17 0:00 /usr/sbin/snmpd -Ls0-6d -a -f
root 570 0.0 1.9 8880 4792 ? Ss 18:17 0:00 /usr/sbin/snmptrapd -Lsd -f
root 692 0.0 0.6 2352 1636 pts/0 S+ 18:21 0:00 grep snmp
So, the init script did not actually stop it. Stopping it properly:
root@am335x-evm:~# systemctl stop snmpd
root@am335x-evm:~# ps aux | grep snmp
root 570 0.0 1.9 8880 4792 ? Ss 18:17 0:00 /usr/sbin/snmptrapd -Lsd -f
root 708 0.0 0.6 2352 1520 pts/0 S+ 18:24 0:00 grep snmp
Then trying to run it again:
root@am335x-evm:~# /usr/sbin/snmpd -Lo -a -f
NET-SNMP version 5.8
It works.
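So the endpoint error here simply meant the address was already in use by another snmpd. A minimal sketch of that check before starting a second instance by hand (assumes pgrep is available):
# Sketch: refuse to start a manual snmpd while another instance is still running.
if pgrep -x snmpd >/dev/null; then
    echo "snmpd is already running:"
    pgrep -ax snmpd
    echo "stop it first, e.g.: systemctl stop snmpd"
else
    /usr/sbin/snmpd -Lo -a -f
fi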

Why can't I scale an OpenShift cart that has an environment variable defined?

I am trying to integrate Hazelcast into the Tomcat cartridge (https://github.com/worldline/openshift-cartridge-tomcat). The problem is retrieving the ip:ports of the gears of a scalable application. I looked at how vert.x does it, and it is perfectly fine: it uses a pub/sub mechanism to set the ip and port in the future environment variable. I can see this in the hook folder's "set-vertex-cluster" file.
echo $list > $OPENSHIFT_VERTX_DIR/env/OPENSHIFT_VERTX_HAZELCAST_CLUSTER
I did it the same way, replacing VERTX with TOMCAT (the short name of the cart).
But after creating the app there is no OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER env variable.
I looked at how JbossEAP does it. It has
touch ${OPENSHIFT_JBOSSEAP_DIR}/env/OPENSHIFT_JBOSSEAP_CLUSTER
https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-jbosseap/bin/install
It worked for me, and finally I see the OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER env var, populated with gear_ip:gear_port. It's good. But when I scale the app I get the error:
Activation of new gears failed: 53d2ed31e0b8cd2bba00051f: Error activating gear: CLIENT_ERROR: Failed to execute: 'control start' for /var/lib/openshift/53d2ed31e0b8cd2bba00051f/tomcat
Unable to complete the requested operation due to: An invalid exit code (1) was returned from the server ex-std-node92.prod.rhcloud.com. This indicates an unexpected problem during the execution of your request.
Reference ID: 0b4e8a465d1901e8317a18739586e6d1
OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER is populated with gear1_ip:gear1_port and gear2_ip:gear2_port, but of course the second gear failed to start.
When I remove
touch ${OPENSHIFT_TOMCAT_DIR}/env/OPENSHIFT_TOMCAT_HAZELCAST_CLUSTER
from the bin/install file, everything is fine! Except that I don't have the list of cluster members...
I am going mad, having struggled with this problem all day long. Can anybody help me, please?
UPDATED:
• Customer A creates an OpenShift application A1 with the Git downloadable cartridge.
• OpenShift installs the downloadable cartridge onto node N1 and installs it into application A1.
• Customer A now wants to scale application A1.
• OpenShift tries to scale application A1 by acquiring a new gear on node N2 (notice that it is different from N1 above) and copies the content from A1 onto N2, but somehow does not copy all the environment variables and necessary settings into the .env folder of every gear.
• The gear creation now fails on N2, because the downloadable cartridge content is not available on N2, due to the following commands in bin/tomcat:
# Filter user-owned configuration files through sed to replace all
# ${OPENSHIFT_*} variables with their actual values, and write the
# resulting filtered files to the live Tomcat configuration location.
sed_replace_env=$(print_sed_exp_replace_env_var)
replacement_conf_files=(
  "server.xml"
  "context.xml"
)
for conf_file in "${replacement_conf_files[@]}"; do
  sed ${sed_replace_env} ${OPENSHIFT_REPO_DIR}/.openshift/config/${conf_file} > ${OPENSHIFT_TOMCAT_DIR}/conf/${conf_file}
done
The particular function:
function print_sed_exp_replace_env_var {
  sed_exp=""
  for openshift_var in $(env | grep OPENSHIFT_ | awk -F '=' '{print $1}')
  do
    # environment variable values that contain " or / need to be escaped
    # or they will cause problems in the sed command line.
    variable_val=$(echo "${!openshift_var}" | sed -e "s#\/#\\\\/#g" | sed -e "s/\"/\\\\\"/g")
    # the entire sed s/search/replace/g command needs to be quoted in case the variable value
    # contains a space.
    sed_exp="${sed_exp} -e \"s/\\\${env.${openshift_var}}/${variable_val}/g\""
  done
  printf "%s\n" "$sed_exp"
}
As you can see, there is a danger that this sed command turns server.xml and context.xml into blank files if the environment variables are not present.
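One defensive option (a sketch, not part of the original cartridge) is to filter into a temporary file and only overwrite the live config when the result is non-empty:
# Sketch: never clobber a working config with an empty file when sed fails
# because OPENSHIFT_* variables or source paths are missing.
for conf_file in "${replacement_conf_files[@]}"; do
  src="${OPENSHIFT_REPO_DIR}/.openshift/config/${conf_file}"
  dst="${OPENSHIFT_TOMCAT_DIR}/conf/${conf_file}"
  tmp="$(mktemp)"
  if sed ${sed_replace_env} "$src" > "$tmp" && [ -s "$tmp" ]; then
    mv "$tmp" "$dst"
  else
    echo "WARNING: refusing to overwrite $dst with an empty or failed result" >&2
    rm -f "$tmp"
  fi
done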
The correct order that OpenShift should perform is:
• Customer A creates an OpenShift application A1 with the Git downloadable cartridge.
• OpenShift installs the downloadable cartridge onto node N1 and installs it into application A1.
• Customer A now wants to scale application A1.
• OpenShift tries to scale application A1 by acquiring a new gear on node N2 (notice that it is different from N1 above), and copies all necessary environment variables into the new gear as well.
• When a script within your cartridge requires the server.xml and context.xml templates from the downloadable cartridge, they can now be successfully found and copied.

Bash case not properly evaluating value

The Problem
I have a script with a case statement which I expect to execute based on the value of a variable. The case statement appears to either ignore the value or not evaluate it properly, instead dropping to the default.
The Scenario
I pull a specific character out of our server hostnames which indicates where in our environment the server resides. We have six different locations:
Management(m): servers that are part of the infrastructure such as monitoring, email, ticketing, etc
Development(d): servers that are for developing code and application functionality
Test(t): servers that are used for initial testing of the code and application functionality
Implementation(i): servers that the code is pushed to for pre-production evaluation
Production(p): self-explanatory
Services(s): servers that the customer needs to integrate that provide functionality across their project. These are separate from the Management servers in that these are customer servers while Management servers are owned and operated by us.
After pulling the character from the hostname I pass it to a case block. I expect the case block to evaluate the character and add a couple of lines of text to our rsyslog.conf file. What happens instead is that the case block falls through to the default, which does nothing but tell the person building the server to configure the entry manually due to an unrecognized character.
I've tested this manually against a server I recently built and verified that the character I am pulling from the hostname (an 's') is expected and accounted for in the case block.
The Code
# Determine which environment our server resides in
host=$(hostname -s)
env=${host:(-8):1}
OLDFILE=/etc/rsyslog.conf
NEWFILE=/etc/rsyslog.conf.new
# This is the configuration we need on every server regardless of environment
read -d '' common <<- EOF
...
TEXT WHICH IS ADDED TO ALL CONFIG FILES REGARDLESS OF FURTHER CODE EXECUTION
SNIPPED
....
EOF
# If a server is in the Management, Dev or Test environments send logs to lg01
read -d '' lg01conf <<- EOF
# Relay messages to lg01
*.notice @@xxx.xxx.xxx.100
#### END FORWARDING RULE ####
EOF
# If a server is in the Imp, Prod or is a non-affiliated Services zone server send logs to lg02
read -d '' lg02conf <<- EOF
# Relay messages to lg02
*.notice @@xxx.xxx.xxx.101
#### END FORWARDING RULE ####
EOF
# The general rsyslog configuration remains the same; pull it out and write it to a new file
head -n 63 $OLDFILE > $NEWFILE
# Add the common language to our config file
echo "$common" >> $NEWFILE
# Depending on which environment ($env) our server is in, add the appropriate
# remote log server to the configuration with the $common settings.
case $env in
m) echo "$lg01conf" >> $NEWFILE;;
d) echo "$lg01conf" >> $NEWFILE;;
t) echo "$lg01conf" >> $NEWFILE;;
i) echo "$lg02conf" >> $NEWFILE;;
p) echo "$lg02conf" >> $NEWFILE;;
s) echo "$lg02conf" >> $NEWFILE;;
*) echo "Unknown environment; Manually configure"
esac
# Keep a dated backup of the original rsyslog.conf file
cp $OLDFILE $OLDFILE.$(date +%Y%m%d)
# Replace the original rsyslog.conf file with the new version
mv $NEWFILE $OLDFILE
An Aside
I've already determined that I can combine the different groups from the case block onto single lines (a total of two) using the | operator. I've listed it in the manner above since this is how it was coded while I was having issues with it.
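For reference, the combined form would look something like this (a sketch of the same routing logic, not the code as deployed):
case $env in
    m|d|t) echo "$lg01conf" >> $NEWFILE ;;
    i|p|s) echo "$lg02conf" >> $NEWFILE ;;
    *)     echo "Unknown environment; Manually configure" ;;
esac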
I can't see anything wrong with your code. Maybe add another ;; to the default clause. To find the problem, add set -vx as the first line; it will show you lots of debug information.
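In the same spirit, a quick way to see exactly what the case statement receives (a sketch reusing the hostname slicing from the question) is to trace the script and print the extracted character unambiguously:
#!/bin/bash
set -vx                       # echo each line and its expansion while running
host=$(hostname -s)
env=${host:(-8):1}
# %q reveals invisible characters (carriage returns, stray spaces, etc.)
# that would stop the case patterns from matching.
printf 'env=%q\n' "$env"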

How do I make cloud-init startup scripts run every time my EC2 instance boots?

I have an EC2 instance running an AMI based on the Amazon Linux AMI. Like all such AMIs, it supports the cloud-init system for running startup scripts based on the User Data passed into every instance. In this particular case, my User Data input happens to be an Include file that sources several other startup scripts:
#include
http://s3.amazonaws.com/path/to/script/1
http://s3.amazonaws.com/path/to/script/2
The first time I boot my instance, the cloud-init startup script runs correctly. However, if I do a soft reboot of the instance (by running sudo shutdown -r now, for instance), the instance comes back up without running the startup script the second time around. If I go into the system logs, I can see:
Running cloud-init user-scripts
user-scripts already ran once-per-instance
[ OK ]
This is not what I want -- I can see the utility of having startup scripts that only run once per instance lifetime, but in my case these should run every time the instance starts up, like normal startup scripts.
I realize that one possible solution is to manually have my scripts insert themselves into rc.local after running the first time. This seems burdensome, however, since the cloud-init and rc.d environments are subtly different and I would now have to debug scripts on first launch and all subsequent launches separately.
Does anyone know how I can tell cloud-init to always run my scripts? This certainly sounds like something the designers of cloud-init would have considered.
In 11.10, 12.04 and later, you can achieve this by making the 'scripts-user' module run 'always'.
In /etc/cloud/cloud.cfg you'll see something like:
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- keys-to-console
- phone-home
- final-message
This can be modified after boot, or cloud-config data overriding this stanza can be inserted via user-data. I.e., in user-data you can provide:
#cloud-config
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- [scripts-user, always]
- keys-to-console
- phone-home
- final-message
That can also be '#included' as you've done in your description.
Unfortunately, right now, you cannot modify the 'cloud_final_modules', but only override it. I hope to add the ability to modify config sections at some point.
There is a bit more information on this in the cloud-config doc at
https://github.com/canonical/cloud-init/tree/master/doc/examples
Alternatively, you can put files in /var/lib/cloud/scripts/per-boot , and they'll be run by the 'scripts-per-boot' path.
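For example (a sketch; the script name and contents are made up), dropping an executable file into that directory is all that is needed:
# Sketch: install a script that the scripts-per-boot module runs on every boot.
cat > /var/lib/cloud/scripts/per-boot/10-refresh.sh <<'EOF'
#!/bin/sh
echo "booted at $(date)" >> /var/log/per-boot.log
EOF
chmod +x /var/lib/cloud/scripts/per-boot/10-refresh.sh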
In /etc/init.d/cloud-init-user-scripts, edit this line:
/usr/bin/cloud-init-run-module once-per-instance user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
to
/usr/bin/cloud-init-run-module always user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
Good luck !
cloud-init supports this now natively, see runcmd vs bootcmd command descriptions in the documentation (http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot):
"runcmd":
#cloud-config
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note, that the list has to be proper yaml, so you have to quote
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world'=========" ]
- ls -l /root
- [ wget, "http://slashdot.org", -O, /tmp/index.html ]
"bootcmd":
#cloud-config
# boot commands
# default: none
# this is very similar to runcmd, but commands run very early
# in the boot process, only slightly after a 'boothook' would run.
# bootcmd should really only be used for things that could not be
# done later in the boot process. bootcmd is very much like
# boothook, but possibly with more friendly.
# - bootcmd will run on every boot
# - the INSTANCE_ID variable will be set to the current instance id.
# - you can use 'cloud-init-per' command to help only run once
bootcmd:
- echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
- [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
Also note the "cloud-init-per" command example in bootcmd. From its help:
Usage: cloud-init-per frequency name cmd [ arg1 [ arg2 [ ... ] ]
run cmd with arguments provided.
This utility can make it easier to use boothooks or bootcmd
on a per "once" or "always" basis.
If frequency is:
* once: run only once (do not re-run for new instance-id)
* instance: run only the first boot for a given instance-id
* always: run every boot
One possibility, although somewhat hackish, is to delete the lock file that cloud-init uses to determine whether or not the user-scripts have already run. In my case (Amazon Linux AMI), this lock file is located in /var/lib/cloud/sem/ and is named user-scripts.i-7f3f1d11 (the suffix is the instance ID). Therefore, the following user-data script added to the end of the Include file will do the trick:
#!/bin/sh
rm /var/lib/cloud/sem/user-scripts.*
I'm not sure if this will have any adverse effects on anything else, but it has worked in my experiments.
Use the multipart script below, placing the cloud-config part above your bash script in the user data.
Example: here I am printing "Hello World." to a file.
Stop the instance before adding this to its user data.
Script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "Hello World." >> /var/tmp/sdksdfjsdlf
--//
I struggled with this issue for almost two days, tried all of the solutions I could find and finally, combining several approaches, came up with the following:
MyResource:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        setup_process:
          - "prepare"
          - "run_for_instance"
      prepare:
        commands:
          01_apt_update:
            command: "apt-get update"
          02_clone_project:
            command: "mkdir -p /replication && rm -rf /replication/* && git clone https://github.com/awslabs/dynamodb-cross-region-library.git /replication/dynamodb-cross-region-library/"
          03_build_project:
            command: "mvn install -DskipTests=true"
            cwd: "/replication/dynamodb-cross-region-library"
          04_prepare_for_apac:
            command: "mkdir -p /replication/replication-west && rm -rf /replication/replication-west/* && cp /replication/dynamodb-cross-region-library/target/dynamodb-cross-region-replication-1.2.1.jar /replication/replication-west/replication-runner.jar"
      run_for_instance:
        commands:
          01_run:
            command: !Sub "java -jar replication-runner.jar --sourceRegion us-east-1 --sourceTable ${TableName} --destinationRegion ap-southeast-1 --destinationTable ${TableName} --taskName -us-ap >/dev/null 2>&1 &"
            cwd: "/replication/replication-west"
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          cloud_final_modules:
          - [scripts-user, always]
          runcmd:
          - /usr/local/bin/cfn-init -v -c setup_process --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
          - /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
This is the setup for a DynamoDB cross-region replication process.
If someone wants to do this with CDK, here's a Python example.
For Windows, user data has a special persist tag, but for Linux you need to use multipart user data to set up cloud-init first. This Linux example worked with the cloud-config part type (see the referenced blog) instead of cloud-boothook, which requires a cloud-init-per call (see also bootcmd) that I couldn't test out (e.g. cloud-init-per always).
Linux example:
# Create some userdata commands
instance_userdata = ec2.UserData.for_linux()
instance_userdata.add_commands("apt update")
# ...

# Now create the first part to make cloud-init run it always
cinit_conf = ec2.UserData.for_linux()
cinit_conf.add_commands('#cloud-config')
cinit_conf.add_commands('cloud_final_modules:')
cinit_conf.add_commands('- [scripts-user, always]')

multipart_ud = ec2.MultipartUserData()
#### Setup to run every time instance starts
multipart_ud.add_part(ec2.MultipartBody.from_user_data(cinit_conf, content_type='text/cloud-config'))
#### Add the commands desired to run every time
multipart_ud.add_part(ec2.MultipartBody.from_user_data(instance_userdata))

ec2.Instance(
    self, "myec2",
    user_data=multipart_ud,
    # other required config...
)
Windows example:
instance_userdata = ec2.UserData.for_windows()
# Bootstrap
instance_userdata.add_commands("Write-Output 'Run some commands'")
# ...

# Making all the commands persistent - i.e. running on each instance start
data_script = instance_userdata.render()
data_script += "<persist>true</persist>"
ud = ec2.UserData.custom(data_script)

ec2.Instance(
    self, "myWinec2",
    user_data=ud,
    # other required config...
)
Another approach is to use #cloud-boothook in your user data script. From the docs:
Cloud Boothook
Begins with #cloud-boothook or Content-Type: text/cloud-boothook.
This content is boothook data. It is stored in a file under /var/lib/cloud and then executed immediately.
This is the earliest "hook" available. There is no mechanism provided for running it only one time. The boothook must take care
of this itself. It is provided with the instance ID in the environment
variable INSTANCE_ID. Use this variable to provide a once-per-instance
set of boothook data.
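Put together, a boothook that uses INSTANCE_ID for its once-per-instance part might look like this (a sketch; the marker path and log file are arbitrary):
#cloud-boothook
#!/bin/sh
# Runs on every boot; the INSTANCE_ID guard makes part of it once-per-instance.
marker="/var/lib/cloud/boothook-ran.${INSTANCE_ID}"
if [ ! -e "$marker" ]; then
    echo "first boot of instance ${INSTANCE_ID}" >> /var/log/boothook.log
    touch "$marker"
fi
echo "boot at $(date)" >> /var/log/boothook.log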

Creating a shell script to automate CSV export in MongoDB

We have MongoDB, and in it a list of collections which I want to export to CSV using the mongoexport tool. I need to do this often, and the names of the collections change sometimes. So what I want to do is create a shell script that I can just run, and it will iterate over the collections in the MongoDB database and create CSV files. Right now I have a script, but it's not automated; for example, I have the following in it:
mongoexport -d mydbname -c mycollname.asdno3rnknlasfkn.collection --csv -f field1,field2,field3,field4 -o mycollname.asdno3rnknlasfkn.collection.csv
In this command all the elements stay the same except the CSV filename and the collection name, which are both the same value.
So I want to create a script which will:
show collections
then loop over the collection names retrieved and substitute each one into the export command.
This can easily be done via the shell - I don't know if the comments above refer to old versions of the mongo shell...
Example:
echo 'show collections' | mongo dbname --quiet
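Building on that, a minimal sketch of the full loop (assuming the legacy mongo shell, and reusing the database and field names from the question, which you would adjust):
#!/bin/bash
# Sketch: export every collection in a database to its own CSV file.
DB=mydbname
FIELDS=field1,field2,field3,field4

# List the collection names via the shell, suppressing the banner output.
collections=$(echo 'show collections' | mongo "$DB" --quiet)

for coll in $collections; do
    echo "Exporting $coll ..."
    mongoexport -d "$DB" -c "$coll" --csv -f "$FIELDS" -o "$coll.csv"
done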
You cannot call "show collections" through mongo from the shell.
I suggest you write a small script/program using your favorite language,
fetch the collection names through the driver's API, and then execute
mongoexport from your script/program using a system call (system()).
#############################################################
Script 1 -- to produce a list of databases in MongoDB server
#############################################################
#!/bin/bash
####################################################################
# PROGRAM: mongolistdbs.sh
#
# PROGRAMMER: REDACTED
# PURPOSE: Program created to list databases in Mongo
#
# CREATED: 02/14/2011
#
# MODIFCATIONS:
###################################################################
#set -x
#supply mongo connection parms: server and port
mongo localhost:12345 --quiet <<!! 2>&1
show dbs
!!
########################################################
Script 2 -- This is the driver that calls script 1
#########################################################
#!/bin/bash
####################################################################
# PROGRAM: mongodb_backup_driver.sh
#
# PROGRAMMER: REDACTED
# PURPOSE: Program created to drive the MongoDB full database
# backups
# CREATED: 02/14/2011
#
# MODIFCATIONS:
###################################################################
################################################
# backup strategy is individual backup
# of all databases via loop and db list
# (dbparm is empty string)
###############################################
echo "Strategy: All via Individual Loop"
####################################
### here is the call of script 1
####################################
DBs=`./mongolistdbs.sh | cut -f1 | egrep -v ">"`
for db in $DBs;
do
  echo "Driver loop for backup of [ ${db} ]"
  #############################################################
  ### here I'm calling my backup script (not supplied)
  ### with the database as a parameter within a loop
  ### to back up all databases individually
  #############################################################
  ./royall_mongodb_backup.sh ${db}
done
