How to run gpg from a script run by cron? - bash

I have a script with a part that looks like this:
for file in `ls *.tar.gz`; do
echo encrypting $file
gpg --passphrase-file /home/$USER/.gnupg/backup-passphrase \
--simple-sk-checksum -c $file
done
For some reason, if I run this script manually it works perfectly fine and all files are encrypted. If I run it as a cron job, echo $file works fine (I see "encrypting <file>" in the log), but the file doesn't get encrypted and gpg silently fails with no stdout/stderr output.
Any clues?

It turns out that the answer was easier than I expected: the --batch parameter was missing. Without it, gpg tries to read from /dev/tty, which doesn't exist for cron jobs. To debug this I used the --exit-on-status-write-error parameter, but what pointed me in that direction was the exit status of 2 reported by echoing $?, as Cd-Man suggested.
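For the script in the question, the cron-friendly version would look something like this (a sketch; the backup directory is an example, since cron won't start in the directory where the tarballs live):
#!/bin/bash
# Sketch of the loop from the question with --batch added, so gpg never
# tries to prompt on /dev/tty (which cron jobs don't have).
cd /home/$USER/backups || exit 1    # example path; adjust to where the tarballs actually are
for file in *.tar.gz; do
    echo "encrypting $file"
    gpg --batch --passphrase-file "/home/$USER/.gnupg/backup-passphrase" \
        --simple-sk-checksum -c "$file"
done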

In my case gpg can't find the home directory that holds the keys:
gpg: no default secret key: No secret key
gpg: 0003608.cmd: sign+encrypt failed: No secret key
So I added --homedir /root/.gnupg. The final command looks like this:
echo 'password' | gpg -vvv --homedir /root/.gnupg --batch --passphrase-fd 0 \
--output /usr/share/file.gpg --encrypt --sign /usr/share/file.tar.bz2

You should make sure that gpg is in your PATH when the cron job is running. Your best bet is to get the full path of gpg (by running which gpg) and then invoke it using that full path (for example /usr/bin/gpg ...).
Some other debugging tips (a combined sketch follows these tips):
output the value of $? after running gpg (like this: echo "$?"). This gives you the exit code, which should be 0 if it succeeded
redirect gpg's STDERR to STDOUT and capture both in a file, to inspect any error messages that might get printed (you can do this on the command line: /usr/bin/gpg ... >> gpg.log 2>&1)

make sure the user that is running the cron job has the permissions needed to encrypt the file.
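A combined sketch of those tips; the log path is just an example, and "$file" is assumed to come from the loop in the question:
#!/bin/sh
# Call gpg by absolute path and capture stdout and stderr in one log file.
/usr/bin/gpg --batch -c "$file" >> /tmp/gpg-cron.log 2>&1
echo "gpg exit code: $?" >> /tmp/gpg-cron.log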

I've come across this problem once.
I can't really tell you why, but I don't think cron executes with the same environment variables as the user does.
I actually had to export the correct PATH for my programs to execute properly.
Is gpg at least trying to execute?
Are the files you are trying to encrypt actually in the current directory when cron runs the script?
Maybe try executing echo `whereis gpg` and echo $PATH in your script to see whether it's included... Worked for me.
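One way to check that (a sketch; the log path is just an example) is to have the cron job dump its own view of the environment and compare it with what you see in a login shell:
#!/bin/sh
# Dump what cron actually gives the script, then compare with your login shell.
{
    echo "PATH=$PATH"
    command -v gpg || echo "gpg not found in PATH"
    pwd
    env | sort
} > /tmp/cron-env.log 2>&1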

@skinp Cron jobs are executed by sh, whereas most modern Unixes use bash or ksh for interactive logins. The biggest problem (in my experience) is that sh doesn't understand things like:
export PS1='\u#\h:\w> '
which needs to be changed to:
PS1='\u#\h:\w> '
export PS1
So if cron runs a shell script which defines an environment variable using the first syntax, before running some other command, the other command will never be executed because sh bombs out trying to define the variable.
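If the script really depends on bash, one workaround (a sketch; paths and schedule are examples) is to tell cron which shell to use, either via the SHELL variable at the top of the crontab or by invoking bash explicitly:
# In the crontab: run commands with bash instead of the default sh
SHELL=/bin/bash
0 3 * * * /home/user/bin/backup.sh

# Or keep the default shell and call bash explicitly
0 3 * * * /bin/bash /home/user/bin/backup.sh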

In my case: "gpg: decryption failed: Bad session key".
I tried adding /usr/bin/gpg, checking the version, setting --batch, setting --homedir (with /root/.gnupg and /home/user/.gnupg), and none of it worked.
/usr/bin/gpg -d --batch --homedir /home/ec2-user/.gnupg --no-mdc-warning --quiet --passphrase "$GPG_PP" "$file"
It turned out that cron on the AWS Elastic Beanstalk instance needed the environment variables loaded so that $GPG_PP was actually set for --passphrase. The crontab entry is now:
0 15 * * * $(source /opt/elasticbeanstalk/support/envvars && /home/ec2-user/bin/script.sh >> /home/ec2-user/logs/cron_out.log 2>&1)
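An alternative (a sketch, assuming the same Elastic Beanstalk env file) is to source the variables inside the script itself, so the crontab entry stays a plain path:
#!/bin/bash
# Load the Elastic Beanstalk environment variables before touching gpg,
# so $GPG_PP is defined even under cron's minimal environment.
source /opt/elasticbeanstalk/support/envvars

# "$file" stands in for whatever loop or argument names the encrypted file.
/usr/bin/gpg -d --batch --homedir /home/ec2-user/.gnupg --no-mdc-warning --quiet \
    --passphrase "$GPG_PP" "$file"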

Related

Laravel on EB: postdeploy script fails on /opt/elasticbeanstalk/bin/get-config

I am writing a postdeploy hook on my EB instance. The code below is in a file at .platform/hooks/postdeploy/my_script.sh
#!/bin/sh
# Update RDS_HOSTNAME
host_name=`/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME`
echo "RDS_HOSTNAME="'"'$host_name'"' >> /dev/null 2>&1 | sudo tee /var/app/current/aws.env.tmp.config
PROBLEM: The file aws.env.tmp.config has nothing written in it.
NOTE: The same command /opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME returns the right value when I execute it on console post-deployment.
What am I doing wrong here? Any suggestions are truly appreciated.
Finally figured it out. Eureka!! For anyone else facing the above issue, there are 2 main reasons to look into:
End of line sequence: set it to LF (especially if your local machine is a Windows machine, make sure the EOL is correct)
Remove the redirect to /dev/null
Once I fixed these 2 issues, the new script became:
#!/bin/sh
# Update RDS_HOSTNAME
host_name=`/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME`
echo "RDS_HOSTNAME="'"'$host_name'"' | sudo tee /var/app/current/aws.env.tmp.config
This is now working on EB postdeploy as a hook.
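For what it's worth, the same hook can also be written with $(...) and quoting so a value containing unusual characters still survives intact (a minor cleanup sketch, not required for the fix above):
#!/bin/sh
# Same hook, using $(...) instead of backticks and quoting the expansion.
host_name="$(/opt/elasticbeanstalk/bin/get-config environment -k RDS_HOSTNAME)"
echo "RDS_HOSTNAME=\"$host_name\"" | sudo tee /var/app/current/aws.env.tmp.config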

Applying sudo to some commands in script

I have a bash script that mostly needs to run with default user rights, but there are some parts that need sudo (like copying stuff into system folders). I could just run the whole script with sudo ./script.sh, but then any files the script creates or modifies end up with the wrong ownership and access rights.
So, how can I use sudo for only some commands in the script? Is it possible to ask for the sudo password at the beginning (when the script starts) but still run some lines of the script as the current user?
You could add this to the top of your script:
while ! echo "$PW" | sudo -S -v > /dev/null 2>&1; do
    read -s -p "password: " PW
    echo
done
That ensures the sudo credentials are cached for 5 minutes. Then you could run the commands that need sudo, and just those, with sudo in front.
Edit: Incorporating mklement0's suggestion from the comments, you can shorten this to:
sudo -v || exit
The original version, which I adapted from a Python snippet I have, might be useful if you want more control over the prompt or the retry logic/limit, but this shorter one is probably what works well for most cases.
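Putting the two pieces together, here is a sketch of a script that authenticates once up front and then mixes unprivileged and privileged commands (the commands themselves are just examples):
#!/bin/bash
# Ask for the sudo password once; the credentials stay cached (5 minutes by default).
sudo -v || exit 1

# This part runs as the invoking user, so the archive keeps your ownership.
tar -czf headers.tar.gz ./include        # example command

# Only the system-folder step goes through sudo.
sudo cp ./include/*.h /usr/include/      # example command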
Each line of your script is a command line. So, for the lines you want, you can simply put sudo in front of those lines of your script. For example:
#!/bin/sh
ls *.h
sudo cp *.h /usr/include/
echo "done" >>log
Obviously I'm just making stuff up. But, this shows that you can use sudo selectively as part of your script.
Just like using sudo interactively, you will be prompted for your user password if you haven't done so recently.

Shc encrypted shell script not executable

I created an encrypted shell script with the tool shc.
The script works just fine on my computer but when I transfer it to
another one (Solaris 10 to Solaris 10) I get the following error:
invalid argument
It's not a permission problem, and the encrypted script should be OK; I guess it's a header/compiler problem.
The shc command used was shc -rf <filename>, so the script should work on another computer!?
According to The Geek Stuff you need to use the -r option to relax security and -f to specify your script file:
shc -r -f script.sh
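For reference, the workflow then looks roughly like this (the .x suffix is what shc gives the compiled binary):
shc -r -f script.sh     # produces script.sh.x (the binary) and script.sh.x.c (generated C source)
./script.sh.x           # copy this binary to the other Solaris 10 machine and run it there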

bash command doesn't seem to work, but its echo does?

Well, I'm new to Linux, so this may be a very newbie kind of thing; here it goes:
I have a script in which I'm trying to send some different jobs to remote computers (in fact Amazon's EC2 instances); these jobs are in fact the same function, which I run with different parameters.
Eventually in the script I have this line:
nohup ssh -fqi key.pem ubuntu@${Instance_Id[idx]} $tmp
if I do:
echo nohup ssh -fqi key.pem ubuntu@${Instance_Id[idx]} $tmp
I get:
nohup ssh -fqi key.pem ubuntu@ec2-72-44-41-228.compute-1.amazonaws.com '(nohup ./Script.sh 11 1&)'
Now the weird thing: if I run the code without the echo in the script, it doesn't work! nohup.out (on my laptop; no nohup.out is created on the remote instance) says: bash: (nohup ./Script.sh 10 1&): No such file or directory
The file does exist locally and remotely and is chmod +x.
If I simply run the very same script with an echo in front of the problematic line, copy its output and paste it into the terminal, it works!
Any clues welcome, thanks!
Try removing the single quotes from $tmp. It looks like bash is treating (nohup ./Script.sh 10 1&) as the command with no parameters, but technically nohup is the command with the parameters ./Script.sh 10 1.
The problem is the single quotes around the nohup command in your $tmp variable. They are not interpreted by the local shell, so ssh passes them along verbatim. The remote shell then tries to interpret (nohup ./Script.sh 10 1&) as a single command name and looks for a file literally called that, which clearly doesn't exist. Make sure you remove the single quotes in $tmp.
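A sketch of how $tmp might be built so the remote shell sees a command rather than a quoted filename (variable and file names follow the question):
# No literal single quotes inside the variable; ssh hands the string to the
# remote shell, which then runs the job in the background via nohup.
tmp="nohup ./Script.sh 11 1 > /dev/null 2>&1 &"
nohup ssh -fqi key.pem "ubuntu@${Instance_Id[idx]}" "$tmp"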

why does my svn backup shell script work fine in a terminal, but fail in crontab?

I have an svn backup script on a Red Hat Linux box; let's call it svnbackup.sh.
It works fine, when I run it in terminal.
But when I put it into crontab, it will not bring svnserve back up, even though the data is backed up correctly.
What am I doing wrong?
killall svnserve
tar -zcf /svndir /backup/
svnserve -d -r /svndir
Usually, 'environment' is the problem in a cron job that works when run 'at the terminal' but not when it is run by cron. Most probably, your PATH is not set to include the directory where you keep svnserve.
Either use an absolute pathname for svnserve or set PATH appropriately in the script.
You can debug, in part, by adding a line such as:
env > /tmp/cron.job.env
to your script to see exactly how little environment is set when your cron job is run.
If you are trying to backup a live version of a repository, you probably should be using svnadmin hotcopy. That said, here are a few possibilities that come to mind as to what might be wrong:
You've put each of those statements as separate entries in your crontab (can't tell from the Q).
The svnserve command takes a password, which cron, in turn, cannot supply.
The svnserve command blocks or hangs indefinitely and gets killed by cron.
The command svnserve is not in your PATH in cron.
Assuming that svnserve does not take a password, this might fix the problem:
#! /bin/bash
# backup_and_restart_svnserve.sh
export PATH=/bin:/sbin:/usr/bin:/usr/local/bin # set up your path here
killall svnserve && \
tar -zcf /svndir /backup/ && \
svnserve -d -r /svndir >/dev/null 2>&1 &
Now, use "backup_and_restart_svnserve.sh" as the script to execute. Since it runs in the background, it should hopefully continue running even when cron executes the next task.
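A matching crontab entry might then look something like this (schedule and paths are examples):
# Back up and restart svnserve every night at 02:30, keeping any output in a log.
30 2 * * * /usr/local/bin/backup_and_restart_svnserve.sh >> /var/log/svnbackup.log 2>&1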
