Laravel on EB: postdeploy script fails on /opt/elasticbeanstalk/bin/get-config

I am writing a postdeploy hook on my EB instance. The code below is in a file at .platform/hooks/postdeploy/my_script.sh
#!/bin/sh
# Update RDS_HOSTNAME
host_name=`/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME`
echo "RDS_HOSTNAME="'"'$host_name'"' >> /dev/null 2>&1 | sudo tee /var/app/current/aws.env.tmp.config
PROBLEM: The file aws.env.tmp.config has nothing written in it.
NOTE: The same command /opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME returns the right value when I execute it on console post-deployment.
What am I doing wrong here? Any suggestions are truly appreciated.

Finally figured it out. Eureka!! For anyone else facing the above issue, there are 2 main things to look into:
End of line sequence: set it to LF (especially if your local machine is a Windows machine, make sure the EOL is correct)
Remove the redirect to /dev/null: the >> /dev/null 2>&1 discards the echo output before it ever reaches the pipe, so tee receives nothing
Once I fixed these 2 issues, the new script became:
#!/bin/sh
# Update RDS_HOSTNAME
host_name=$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME)
echo "RDS_HOSTNAME=\"$host_name\"" | sudo tee /var/app/current/aws.env.tmp.config
This is now working on EB postdeploy as a hook.
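A note on the EOL fix: if the hook was authored on Windows, you can normalize the line endings and mark the hook executable before deploying. A minimal sketch, assuming dos2unix is available on your local machine:
# run locally before deploying; dos2unix availability is an assumption
dos2unix .platform/hooks/postdeploy/my_script.sh
chmod +x .platform/hooks/postdeploy/my_script.sh
Platform hooks also need to be executable, so the chmod matters as much as the line endings.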

Related

Eval in docker-machine: terminal vs shell script

I'm trying to run a simple shell script to automate changing docker-machine environments. The problem is this: when I run the following commands directly in the Mac terminal, the following is output:
eval $(docker-machine env default)
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically what you would expect. However, when I run the following .sh script:
#!/usr/bin/env bash
eval $(docker-machine env default)
The output is:
./run.sh
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default digitalocean Running tcp://***.**.***.***:**** v1.12.0
So basically, it is not setting it as active and I cannot access it.
Has anyone run into this issue before and know how to solve it? It seems really strange to me; I have pretty much everything else running and automated apart from this facet.
Cheers, Aaron
I think you need to source your shell script:
source ./myscript.sh
The exports in the eval are applied only to the subshell that was started to run the script, and they are discarded when it exits. They need to go to the parent, e.g. the login shell.
Consider a.sh
#!/bin/bash
eval $(echo 'export a=123')
export b=234
when run in the following two ways; note that after ./a.sh neither variable is set, while after source a.sh both are:
$ ./a.sh
$ echo $a
$ echo $b
$ source a.sh
$ echo $a
123
$ echo $b
234
$
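For the docker-machine case specifically, one option is to wrap the eval in a function defined in ~/.bashrc, so that it always runs in the login shell itself. A minimal sketch; the function name is illustrative:
# switch the active docker-machine in the current shell
use-machine() {
    eval "$(docker-machine env "${1:-default}")"
}
After use-machine default, docker-machine ls shows the asterisk in the ACTIVE column, because the DOCKER_* exports landed in the interactive shell instead of a throwaway subshell.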

grep command works on the command line, but not in bash script: "No such file or directory" error

I know that there are some related questions already about this, but can't make it work for me!
I can run grep on the command line and it works fine, but if I do it in a bash script, I get the following error:
grep: secondword: No such file or directory
I am connecting via ssh to the server, then I run some commands. The path to grep on the server is /bin/grep, but it still does not work. Here is the sample code:
#!/bin/bash
host="user@host";
ssh $host "
myinfo=\$(grep "word secondword" path/to/file);
"
I also verified that it does not have the CR that is created in Windows with Notepad++. Any ideas on how to fix this?
EDIT:
As suggested, I made the following change with the quotes:
#!/bin/bash
host="user@host";
ssh $host "
myinfo=\$(grep \"word secondword\" path/to/file);
"
but now I see very weird behavior: it looks like it is listing all the files in the home directory on the server. Echoing the variable gives:
file1 file2 file3
file4 file5 etc.
Why does it behave this way? Did I miss something?
Set the script's working directory explicitly; when run from crontab, the user's home directory is the default working directory.
#!/bin/bash
cd Your_Path
host="user@host"
myinfo=$(ssh "$host" 'grep "word secondword" path/to/file')
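Another way to sidestep the nested-quoting problem entirely is to feed ssh a quoted heredoc, so the remote command needs no escaping at all. A minimal sketch, using the same assumed host and file path:
#!/bin/bash
host="user@host"
ssh "$host" bash -s <<'EOF'
# runs remotely; the quoted EOF delimiter stops local expansion
myinfo=$(grep "word secondword" path/to/file)
echo "$myinfo"
EOF
Note that myinfo here exists only on the remote side; to use the result locally, capture the output of ssh itself, as in the snippet above.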

rc.local file not working on Raspberry Pi

This is the contents of my /etc/rc.local file. It is supposed to run on login on my Raspberry Pi, yet the Pi just logs in (I'm using auto login) and then does nothing, i.e. it sits at pi@raspberrypi ~ $ waiting for a command. I have no idea why it's not working, nor do I have any experience with bash scripts.
It should mount a USB drive and then run a file from that drive, but it doesn't.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
sudo /bin/mount /dev/sda1 /media/robousb
sudo python media/robousb/Robopython/usercode_old.py
exit 0
I assume you're running Raspbian, which is pretty much Debian.
rc.local runs as root before login, so you don't need or want sudo; it may be causing an error, and hence nothing happening.
User-level commands that should run for any user on login (unlike rc.local, which runs before login) can be put into /etc/bash.bashrc. That may be more applicable to your situation, at least for the second command.
Login commands for the pi user only can be put into /home/pi/.bashrc.
I don't know the Raspberry Pi specifically, but you could try writing something to a file to see whether rc.local is run at all. For example:
touch /tmp/test.txt
echo "$(date) => It's running" > /tmp/test.txt
If it doesn't work, note that on some OSes (Fedora, RHEL, CentOS for example) the path of that file is /etc/init.d/rc.local. It doesn't cost anything to try that path ;)
I had the exact same problem with an RPi3 on Jessie.
I suggest launching your script from .bashrc by doing
sudo emacs /home/pi/.bashrc
In my case I added at the end of the file:
bash /home/pi/jarvis/jarvis.sh -b &
And that works well on each startup.
I have the same problem. The solution given in the Raspbian forum is:
just change the first line from #!/bin/sh -e to
#!/bin/bash
Ivan X is right: you don't need the sudo command.
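Putting these answers together, a minimal sketch of what the rc.local body could look like (the absolute path to the Python script is an assumption; logging is added so failures become visible):
#!/bin/bash
# Log everything, since rc.local otherwise fails silently
exec > /tmp/rc.local.log 2>&1
# rc.local already runs as root, so no sudo is needed
/bin/mount /dev/sda1 /media/robousb
# Absolute path: rc.local does not run from your home directory.
# Backgrounded so the script can still reach exit 0.
/usr/bin/python /media/robousb/Robopython/usercode_old.py &
exit 0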

Bash script: Turn on errors?

After designing a simple shell/bash-based backup script on my Ubuntu machine and getting it working, I uploaded it to my Debian server, which outputs a number of errors while executing it.
What can I do to turn on "error handling" on my Ubuntu machine to make it easier to debug?
ssh into the server
run the script by hand with -v or -x, or both
try to duplicate the user, group, and environment of the failing run in your terminal window. If necessary, run the program with something like su -c 'sh -v script' otheruser
You might also want to pipe the output of the failing command, particularly if it is run by cron(8), into /bin/logger, perhaps something like:
sh -v -x badscript 2>&1 | /bin/logger -t badscript
and then go look at /var/log/messages.
Bash lets you turn on debugging selectively, or completely, with the set command.
The command set -x will turn on debugging anywhere in your script. Likewise, set +x will turn it off again. This is useful if you only want to see debug output from parts of your script.
Change your shebang line to include the trace option:
#!/bin/bash -x
You can also have Bash check the script for syntax errors without running it:
$ bash -n scriptname
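Pulling these answers together, a minimal sketch of a script header that fails fast and traces only the region being debugged (the ERR trap goes beyond what the answers above mention):
#!/bin/bash
# Exit on errors, on unset variables, and on failures inside pipelines
set -euo pipefail
# Report the line number of the failing command before exiting
trap 'echo "error on line $LINENO" >&2' ERR

set -x                      # start tracing the interesting part
cp /etc/hosts /tmp/hosts.bak
set +x                      # stop tracing again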

How to run gpg from a script run by cron?

I have a script that has a part that looks like that:
for file in `ls *.tar.gz`; do
echo encrypting $file
gpg --passphrase-file /home/$USER/.gnupg/backup-passphrase \
--simple-sk-checksum -c $file
done
For some reason if I run this script manually it works perfectly fine and all the files get encrypted. If I run it as a cron job, echo $file works fine (I see "encrypting <file>" in the log), but the file doesn't get encrypted and gpg silently fails with no stdout/stderr output.
Any clues?
It turns out that the answer was simpler than I expected: the --batch parameter was missing. Without it, gpg tries to read from /dev/tty, which doesn't exist for cron jobs. To debug this I used the --exit-on-status-write-error parameter; I was pointed in that direction by the exit status of 2, reported by echoing $? as Cd-Man suggested.
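For reference, a minimal sketch of the loop with --batch added; the backticked ls is also dropped in favor of a plain glob, which avoids word-splitting surprises:
for file in *.tar.gz; do
    echo "encrypting $file"
    gpg --batch --passphrase-file "/home/$USER/.gnupg/backup-passphrase" \
        --simple-sk-checksum -c "$file"
done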
In my case gpg can't find the home directory with the keys:
gpg: no default secret key: No secret key
gpg: 0003608.cmd: sign+encrypt failed: No secret key
So I added --homedir /root/.gnupg. The final command can look like:
echo 'password' | gpg -vvv --homedir /root/.gnupg --batch --passphrase-fd 0
--output /usr/share/file.gpg --encrypt --sign /usr/share/file.tar.bz2
You should make sure that gpg is in your PATH when the cron job is running. Your best bet is to get the full path of gpg (by running which gpg) and invoke it using the full path (for example /usr/bin/gpg ...).
Some other debugging tips:
output the value of $? after running gpg (like this: echo "$?"). This gives you the exit code, which should be 0 if it succeeded
redirect STDERR to STDOUT for gpg and then redirect STDOUT to a file, to inspect any error messages that might get printed (you can do this on the command line: /usr/bin/gpg ... >> gpg.log 2>&1)
make sure the user that is running the cron job has the permissions needed to encrypt the file.
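A minimal sketch combining those tips inside the cron-run script; the log path and file name are illustrative:
#!/bin/bash
LOG=/tmp/gpg-backup.log
file=backup.tar.gz
# full path, captured output, and recorded exit code for later inspection
/usr/bin/gpg --batch --passphrase-file "$HOME/.gnupg/backup-passphrase" \
    -c "$file" >> "$LOG" 2>&1
echo "gpg exited with $? for $file" >> "$LOG"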
I've come across this problem once.
I can't really tell you why, but I don't think cron executes with the same environment variables as the user does.
I actually had to export the correct PATH for my programs to execute properly.
Is gpg at least trying to execute?
Or are the files you're trying to encrypt actually in the current directory when the cron job runs?
Maybe try running whereis gpg and echo $PATH in your script to see if it's included... Worked for me.
@skinp Cron jobs are executed by sh, whereas most modern Unixes use bash or ksh for interactive logins. The biggest problem (in my experience) is that sh doesn't understand things like:
export PS1='\u@\h:\w> '
which needs to be changed to:
PS1='\u@\h:\w> '
export PS1
So if cron runs a shell script which defines an environment variable using the first syntax before running some other command, the other command will never be executed because sh bombs out trying to define the variable.
In my case the error was "gpg: decryption failed: Bad session key".
I tried adding /usr/bin/gpg, checking the version, setting --batch, and setting --homedir (with /root/.gnupg and /home/user/.gnupg), and none of it worked.
/usr/bin/gpg -d --batch --homedir /home/ec2-user/.gnupg --no-mdc-warning --quiet --passphrase "$GPG_PP" "$file"
It turned out that cron on the AWS Beanstalk instance needed the environment variables loaded so that --passphrase "$GPG_PP" had a value. The cron entry is now:
0 15 * * * $(source /opt/elasticbeanstalk/support/envvars && /home/ec2-user/bin/script.sh >> /home/ec2-user/logs/cron_out.log 2>&1)
