Initialization script on Elastic Beanstalk instances - bash

I have some instances in AWS that are managed by Beanstalk. But I need to include a script on these instances so that whenever one is terminated and booted up again, my script runs. Where and how can I configure this?

.ebextensions might be what you're looking for. You can set up ebextensions config files to do a lot of things, from installing packages and placing files on the instance to running raw bash commands.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
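As a minimal sketch, a config file dropped into the .ebextensions folder of your application source can place a script on the instance and run it on every deployment, including when Beanstalk provisions a replacement instance (the file name, script path, and script body below are placeholder assumptions, not from the question):

# .ebextensions/01_bootstrap.config
files:
  "/usr/local/bin/bootstrap.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Placeholder script body
      echo "bootstrapped at $(date)" >> /var/log/bootstrap.log
container_commands:
  01_run_bootstrap:
    command: /usr/local/bin/bootstrap.sh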

Related

Can't save a temporary csv file to /home/app directory on heroku using R shiny app

I published my first simple app on Heroku with a free dyno. This app writes a simple .txt file, which seems to be written correctly because my API services are working fine.
But if I browse the file system using "heroku run bash -a MYAPP", I can't see that file in the folder where I expected it. It's as if the file doesn't exist. Can someone tell me why?
Thanks.
I found this on https://devcenter.heroku.com/articles/active-storage-on-heroku:
In addition, any files stored on disk will not be visible from one-off dynos such as a heroku run bash instance or a scheduler task because these commands use new dynos.
It's still not entirely clear to me, but at least I know it's normal (if strange) behaviour for Heroku!
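You can see this behaviour directly, since every heroku run invocation gets a fresh one-off dyno with its own ephemeral filesystem (MYAPP is a placeholder for your app name):

heroku run -a MYAPP 'echo hello > /tmp/proof.txt'   # written in one one-off dyno
heroku run -a MYAPP 'ls /tmp/proof.txt'             # a new dyno: No such file or directory

The same applies to files written by the web dyno: one-off dynos never see them.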

GCE Windows startup script is not running

I have a simple Django app which I want to keep running on a specific GCE instance. Sometimes the instance gets restarted for reasons not in my control. I created a batch script which I tried to put in the Startup folder, both the user's and the common one. It didn't work. I tried passing the script in via sysprep-specialize-script-url (using Cloud Storage), sysprep-specialize-script-cmd and sysprep-specialize-script-bat. It didn't work. Here's the content of the batch script:
cd C:\Users\kartik_domadiya\Desktop\happierMiscGoogleCloud
manage.py runserver 0.0.0.0:80
pause
I tried running C:\Program Files\Google\Compute Engine\metadata_scripts\run_startup_scripts.cmd manually and it worked (with any metadata key). So I can see that there's no problem with the script itself.
I even tried putting the batch script in Task Scheduler, which didn't work either.
So is there any way I can debug the problem and find out why the batch script isn't working? I am using Windows 2012 R2, if that matters.
PS: I know that's a development server and should not be used in production.
I moved the code to C:/code (basically out of any particular user's folder), gave all users access to it (Right Click > Properties > Security), updated the batch file, and put it into the Startup folder (Run > shell:startup).
It started working after that. I suppose the issue was due to access permissions.
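The updated batch file then looked roughly like this (the exact folder name is an assumption; the post doesn't show the final version):

rem Same script as before, pointed at the new location outside any user folder
cd C:\code\happierMiscGoogleCloud
manage.py runserver 0.0.0.0:80
pause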

Adding dot config and debugging utilities to a docker instance

I've got a project where a Flask server is run as a Docker service via docker-compose (other elements, like other API servers and the DB, are modeled as separate services in Docker Compose).
In my dev flow there are times when it's useful to drop into a bash shell (via docker exec -it <container_id> bash) and do some debugging: poking around at the files in there, taking some logs, writing quick scripts to transform them, etc. In these scenarios I find it would be useful to have things like my bashrc, bash_profile, and the various scripts I use for this sort of thing inside the docker container.
Is there an easy way to package these things and inject them into a (running) container? I'd prefer not to have these various debug things in the main Dockerfile, which is shared.
You could make a Dockerfile.debug which uses the image built by the actual Dockerfile as its base, then COPY your bash files into that.
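A minimal sketch, assuming the shared image is tagged myapp:latest and the dotfiles sit next to Dockerfile.debug (all names here are placeholders):

# Dockerfile.debug
FROM myapp:latest
# Layer the debugging comforts on top of the shared image
COPY .bashrc .bash_profile /root/
COPY debug-scripts/ /usr/local/bin/

Build it with docker build -f Dockerfile.debug -t myapp:debug . so the shared Dockerfile stays untouched.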
Alternatively, locate the relevant container directory under /var/lib/docker and just put the files there (on the host). A trick for finding the correct onion slice is to exec into the container, touch hello.txt, and then find that file on the host.
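For the touch trick, something like this works, assuming a storage driver (such as overlay2) that keeps layers under /var/lib/docker:

docker exec -it <container_id> touch /hello.txt
sudo find /var/lib/docker -name hello.txt   # the match reveals the container's writable layer on the host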

AWS bash script on EC2 instance launch

I would like to automate these steps:
Unzip a zip package (is it possible to upload this zip to an S3 bucket and download it during the script? If so, how?)
Edit Apache configuration files (ports.conf, /etc/apache2/sites-available/example.com.conf)
Run apt-get commands.
I really do not know how to create a script file to be run on EC2 instance startup.
Could anybody help me, please?
Thank you very much.
What you're looking for is User Data, which gives you the possibility to run your script when the EC2 instance is launched.
When you create your EC2 instance, in step 3 (Configure Instance Details) go to the bottom of the page and click on "Advanced Details". From there you can enter your script.
If you're using an Amazon Linux AMI, the CLI is built in and you can use it; make sure the instance has an IAM role defined with the necessary rights on your AWS resources.
Now in terms of your script, this is vague but roughly:
you would run aws s3 cp s3_file local_file to download the zip file from S3 onto the instance, then use the Linux unzip command to unpack the content
Edit your files using sed, cat or >, see this Q&A
run your commands with apt-get
Note: the user-data script runs as root, so you don't need sudo when running commands.
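Putting it together, a sketch of such a user-data script; the bucket name, zip name, port, and site name are placeholder assumptions, and it assumes an Ubuntu AMI where the AWS CLI still needs installing:

#!/bin/bash
# Runs once, as root, at first boot when supplied as EC2 user data
apt-get update -y
apt-get install -y unzip awscli apache2

# Download and unpack the package from S3 (needs an instance role with s3:GetObject)
aws s3 cp s3://my-bucket/package.zip /tmp/package.zip
unzip -o /tmp/package.zip -d /var/www/example.com

# Edit the Apache config files in place
sed -i 's/^Listen 80$/Listen 8080/' /etc/apache2/ports.conf
a2ensite example.com.conf
systemctl restart apache2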

Where can I pull the server host name from if we can't store it in this script?

I noticed someone created a bunch of scripts to run on GemFire clusters: multiple copies of the same script, where the only difference between them is the server name.
Here is a picture of the Github repo
What the script looks like:
#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers
#----------------------------------------------------------
(ssh -n <SERVER_HOST_NAME_HERE> ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start")
SERVER_HOST_NAME_HERE = the IP address or server name that the script was designed for, removed for the purposes of this question.
I would like to create one script with a parameter for the server name. The problem is: I'm not exactly sure where the best place would be to store/retrieve the server IP/host name(s) so the script can reference them. Any ideas? The number of cache servers will vary depending on environment, application, and cluster.
Our development pipeline should work like this ideally:
Users commit a file to GitHub repo
Triggers Jenkins job
Jenkins job copies file to each cache server, shuts down that server using the stop_cache.sh script, then runs the start_cache.sh script. The number of cache servers can vary from cluster to cluster.
GemFire cache servers are updated with new file.
Went with the method suggested by @nos:
Right now you have them hardcoded in each file it seems. So extract them to a separate file (or files), loop through the entries in that file, and run for host in $(cat cache_hostnames.txt); do ./stop_cache.sh $host; done, and something similar for other kinds of services?
Placed the server names in a file, and looped through the file.
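The start script then reduces to a single parameterized loop, along these lines (cache_hostnames.txt, with one host name or IP per line, is the assumed name of the extracted file):

#!/bin/bash
source /sys_data/gemfire/scripts/gf-common.env
#----------------------------------------------------------
# Start the servers listed in cache_hostnames.txt
#----------------------------------------------------------
while read -r host; do
    ssh -n "$host" ". ${GF_INST_HOME}/scripts/gfsh-server.sh gf_cache1 start"
done < cache_hostnames.txt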
This project might be of interest:
https://github.com/Pivotal-Data-Engineering/gemfire-manager
