Setting env variable to a cron scheduled task using whenever gem - ruby

In my code below, I want to set a few environment variables that are stored in a file. Am I missing something? Printing env in production after running 'bundle exec whenever' does not show the environment variables being set. I'm using the whenever gem for a scheduled cron task and have spent hours figuring this out. Other approaches are welcome too.
every 1.day, :at => '2:30 am' do
# Run shell script to assign variables and continue the rake task
system "for line in `cat config/myEnvFile.env` ; do export $line ; done"
rake "task:continue_doing_my_task"
end

system is not a whenever job type. It's Kernel.system, which executes the String being passed to it when the whenever command is run, rather than converting that String to cron syntax. It looks like what you really mean is:
command "for line in `cat config/myEnvFile.env` ; do export $line ; done"
# Note: command instead of system
command is a built-in job type defined by whenever here.
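With only that substitution, the block from the question would read as follows (a sketch of the intermediate state; as explained next, this still isn't enough):
every 1.day, :at => '2:30 am' do
  command "for line in `cat config/myEnvFile.env` ; do export $line ; done"
  rake "task:continue_doing_my_task"
end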
Each line of code inside the every block runs as its own command. If you run whenever with no arguments (so it just displays what it would put in your crontab without actually modifying the crontab), and after making the correction described above, you'll see that the output is something like this:
30 2 * * * /bin/bash -l -c 'for line in `cat config/myEnvFile.env` ; do export $line ; done'
30 2 * * * /bin/bash -l -c 'cd /path/to/project && RAILS_ENV=production bundle exec rake task:continue_doing_my_task --silent > my_log_file.log 2>&1'
Notice two issues:
First, these two commands have nothing to do with each other: they are run as two totally separate processes.
Second, the first one runs in cron's default directory, which is probably not where config/myEnvFile.env is located.
To fix this, you need to merge everything into a single command. By using whenever's rake job type, you will end up in the right directory, but you still need to export all those variables somehow.
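For illustration, one merged variant could look roughly like this (an untested sketch; /path/to/project stands in for your deploy path):
every 1.day, :at => '2:30 am' do
  command "cd /path/to/project && for line in `cat config/myEnvFile.env` ; do export $line ; done && RAILS_ENV=production bundle exec rake task:continue_doing_my_task"
end
The options below avoid hand-rolling that export loop.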
One way to do this is to rename the file .ruby-env and use rvm. rvm, in addition to managing ruby versions for you, will automatically load all environment variables defined in .ruby-env when you enter the directory.
If RVM is not an option for you, or you want something more lightweight, rename the file .env and use dotenv. Their README documents exactly how to use the gem, with or without Rails. Without Rails, it's this easy:
Add dotenv to your Gemfile
Make this change to your Rakefile:
require 'dotenv/tasks' # 1. require this file
namespace :task do
  task continue_doing_my_task: :dotenv do # 2. make :dotenv a prerequisite for your task
    # ...
  end
end
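With that in place, config/schedule.rb only needs the rake job; dotenv loads the variables when the task runs (assuming the env file has been renamed to .env as described above):
every 1.day, :at => '2:30 am' do
  rake "task:continue_doing_my_task"
end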

Related

How can I automate a script [Ruby] to run at a given time every day?

I've written a webcrawler that pulls information into a report and would like to run it every day at 12:00pm. The script is run using:
ruby script.rb
I've tried using the whenever gem (https://github.com/javan/whenever).
My directory structure is this:
/config
    schedule.rb
script.rb
In my script.rb file, I have the following:
every :day, :at => '12:00pm' do
command "ruby script.rb"
end
I've modified the :at time to see if it runs, and it doesn't.
I've also tried:
every :day, :at => '12:00pm' do
`ruby script.rb`
end
I've also looked into the "at" linux utility, but it appears suited to one-time jobs. I'd like this to generate a report every day.
Note: the script specifies where to output so I don't need to give it an output.
I've also tried creating a crontab but have encountered a problem with saving.
I use http://crontab-generator.org/ to generate the correct syntax.
Then I run:
crontab -e
This opens vi, where I paste in the syntax. However, it exits with a status of 1, and if I run:
crontab -l
It says there's no jobs listed.
I've also tried running this as the super user, and it exits the same.
The error message is:
"/usr/bin/vi" exited with status 1
I just want a command to run at a given time, what am I missing?
Edit
Does it matter that I'm on a Mac?

Cron (hourly) task to execute a ruby script

I have a ruby script that I have tested and know works, which I would like to run as an hourly cron job, but I cannot seem to get it firing properly.
The last thing I have tried was placing the line:
ruby ~/ruby_script.rb
in /etc/cron.hourly
Said ruby script is located in the home directory with:
#!/usr/bin/env ruby
as its top line.
I have looked into ruby & cron resources, but most of them seem to cover recurring tasks in a Ruby on Rails environment, when I just want the script to run in my ubuntu environment. I have double-checked that rails is installed as well.
I have had a lot of fun learning more about ubuntu over the past few months and will truly appreciate any assistance I receive here. Thank you in advance.
Try using the current user's crontab:
$ crontab -e
Then add a new cron job:
0 * * * * /bin/bash -l -c 'ruby ~/ruby_script.rb'
Running the command through bash -l loads the user's login environment, so the needed env variables are available during script execution.
(For testing, change the 0 to * so the script runs every minute.)
To log output and errors, you can add redirection to the command:
0 * * * * /bin/bash -l -c 'ruby ~/ruby_script.rb >> ~/ruby_script.log 2>&1 &'
Hope it helps.

Monitoring Ruby script, using Monit - Including RVM

I'm using Monit to monitor a ruby script that uses the Ruby daemons gem, which launches a separate process with a PID, following the instructions from Monitor ruby processes with Monit.
In order to execute the ruby script I need to include RVM in the Monit start and stop strings, so I have access to all the gems.
However when .monitrc executes I get the following error:
$rvm_path (/usr/local/rvm) does not exist./home/william/.rvm/scripts/rvm: line 174: rvm_is_a_shell_function: command not found
/home/william/.rvm/scripts/rvm: line 185: __rvm_teardown: command not found
'myserver_1' failed to start
Aborting event
I added PATH=$PATH:/home/william/.rvm/bin && . /home/william/.rvm/scripts/rvm to the start and stop command strings to try to include RVM. However, it still doesn't work.
Config file .monitrc:
....
check process myserver_1
with pidfile /home/william/ruby/barclays/myapp.rb.pid
start = "/bin/bash -c 'PATH=$PATH:/home/william/.rvm/bin && . /home/william/.rvm/scripts/rvm && ruby /home/william/ruby/barclays/daemonloader.rb start'"
stop = "/bin/bash -c 'PATH=$PATH:/home/william/.rvm/bin && . /home/william/.rvm/scripts/rvm && ruby /home/william/ruby/barclays/daemonloader.rb stop'"
....
Thanks for your help.
EDIT
I've got a feeling the problem is related to environment variables. Quoting from this page:
You should also know that for security reasons Monit purges the environment and only sets a spartan PATH variable that contains /bin, /usr/bin, /sbin and /usr/sbin. If your program or script dies, the reason could be that it expects certain environment variables or to find certain programs via PATH. If this is the case you should set the environment variables you need directly in the start or stop script called by monit.
Finally, Monit uses the system call execv to execute a program or a script. This means that you cannot write shell commands directly in the start, stop or exec statements. To do this, you must do as above; start a shell and issue your commands there. For example:
start program = "/bin/bash -c 'my shell command && my other command'"
Use this:
/path/to/rvm/bin/rvm in /path/to/project do ...
Replace the paths with proper directories for rvm and project and the ... with the commands to stop/start - try:
/usr/bin/env "HOME=/home/william rvm_path=/home/william/.rvm
/home/william/.rvm/bin/rvm in /home/william/ruby/project do
ruby daemonloader.rb start"
This command will load RVM, cd into the project path, load the project's ruby, and execute the given command.
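Plugged back into the .monitrc from the question, that could look roughly like this (an untested sketch; paths are taken from the question):
check process myserver_1
  with pidfile /home/william/ruby/barclays/myapp.rb.pid
  start program = "/usr/bin/env HOME=/home/william rvm_path=/home/william/.rvm /home/william/.rvm/bin/rvm in /home/william/ruby/barclays do ruby daemonloader.rb start"
  stop program = "/usr/bin/env HOME=/home/william rvm_path=/home/william/.rvm /home/william/.rvm/bin/rvm in /home/william/ruby/barclays do ruby daemonloader.rb stop"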
You could try something like this in Monit.
start = "/bin/su - william -c 'cd /home/william/ruby/project && ~/.rvm/bin/rvm default do bundle exec ruby daemonloader.rb start'"
This worked for me.
Specifying the gemset and sourcing the ruby environment solved the problem for me.
start program = "/bin/bash -c 'cd /home/project_path && source /home/user/.rvm/environments/ruby-2.4.2#global && RAILS_ENV=production bundle exec rails s'"

Mysqldump creates empty file when run via cron on linux

I have a bash script mysql_cron.sh that runs mysqldump
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile
This works fine. I then call it from cron:
20 * * * * /home/joe/mysql_cron.sh
and this creates the file /tmp/somefile, but the file is always empty. I have tried adding a
source /home/joe/.bash_profile
to the script to make sure cron has the right env variables, but that doesn't help. I see many other people having this problem but have found no solution. I've also tried using the '>' operator in the crontab to redirect any cron errors to a file, but that doesn't seem to capture any errors. Any troubleshooting ideas are welcome. Thanks!
Add the error output to the file as well (as Damp has said), so that you can check whether there is an error:
#!/bin/bash
/usr/local/mysql/bin/mysqldump -ujoe -ppassword > /tmp/somefile 2>&1
You can also take a look at MySQL's log files at /var/log in case there is some hint there.
Add this line to your script and compare the result between running it from cron versus running it directly:
env > /tmp/env.$$.out
The $$ in the filename is replaced by the PID of the shell running the script, so each run produces its own file. You should be able to diff the two files and see if anything significant differs between the two environments.
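For example (the PIDs in the filenames are placeholders for whatever files actually appear in /tmp):
diff <(sort /tmp/env.1234.out) <(sort /tmp/env.5678.out)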

How to simulate the environment cron executes a script with?

I normally have several problems with how cron executes scripts as they normally don't have my environment setup. Is there a way to invoke bash(?) in the same way cron does so I could test scripts before installing them?
Add this to your crontab (temporarily):
* * * * * env > ~/cronenv
After it runs, do this:
env - `cat ~/cronenv` /bin/sh
This assumes that your cron runs /bin/sh, which is the default regardless of the user's default shell.
Footnote: if env contains more advanced config, e.g. PS1=$(__git_ps1 " (%s)")$, it will fail cryptically with env: ": No such file or directory".
Cron provides only this environment by default:
HOME user's home directory
LOGNAME user's login
PATH=/usr/bin:/usr/sbin
SHELL=/usr/bin/sh
If you need more you can source a script where you define your environment before the scheduling table in the crontab.
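For example, a crontab can set plain NAME=value assignments above the scheduling table (cron does not expand variables in those assignments), or a job can source an environment file itself; a sketch with placeholder paths:
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin

# or load the environment inside the job itself
0 2 * * * . "$HOME/my_env.sh"; /path/to/my_job.sh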
Couple of approaches:
Export cron env and source it:
Add
* * * * * env > ~/cronenv
to your crontab, let it run once, turn it back off, then run
env - `cat ~/cronenv` /bin/sh
And you are now inside a sh session which has cron's environment
Bring your environment to cron
You could skip above exercise and just do a . ~/.profile in front of your cron job, e.g.
* * * * * . ~/.profile; your_command
Use screen
The above two solutions still fall short in that they provide an environment connected to a running X session, with access to dbus etc. For example, on Ubuntu, nmcli (NetworkManager) will work with the above two approaches, but still fail in cron.
* * * * * /usr/bin/screen -dm
Add above line to cron, let it run once, turn it back off. Connect to your screen session (screen -r). If you are checking the screen session has been created (with ps) be aware that they are sometimes in capitals (e.g. ps | grep SCREEN)
Now even nmcli and similar will fail.
You can run:
env - your_command arguments
This will run your_command with an empty environment.
Depending on the shell of the account
sudo su
env -i /bin/sh
or
sudo su
env -i /bin/bash --noprofile --norc
From http://matthew.mceachen.us/blog/howto-simulate-the-cron-environment-1018.html
Answering six years later: the environment mismatch problem is one of the problems solved by systemd "timers" as a cron replacement. Whether you run the systemd "service" from the CLI or via cron, it receives exactly the same environment, avoiding the mismatch.
The most common issue causing cron jobs to fail when they pass manually is the restrictive default $PATH set by cron, which on Ubuntu 16.04 is:
"/usr/bin:/bin"
By contrast, the default $PATH set by systemd on Ubuntu 16.04 is:
"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
So there's already a better chance that a systemd timer is going to find a binary without further hassle.
The downside of systemd timers is that they take slightly more effort to set up: you first create a "service" file defining what you want to run, then a "timer" file defining the schedule to run it on, and finally you "enable" the timer to activate it.
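A minimal sketch of those three steps (unit names, paths and the command are placeholders, not taken from any question above):
# /etc/systemd/system/my-report.service
[Unit]
Description=Generate the daily report

[Service]
Type=oneshot
ExecStart=/usr/bin/ruby /home/joe/script.rb

# /etc/systemd/system/my-report.timer
[Unit]
Description=Run my-report.service daily at noon

[Timer]
OnCalendar=12:00
Persistent=true

[Install]
WantedBy=timers.target
Then activate it with sudo systemctl daemon-reload && sudo systemctl enable --now my-report.timer.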
Create a cron job that runs env and redirects stdout to a file.
Use the file alongside "env -" to create the same environment as a cron job.
Don't forget that since cron's parent is init, it runs programs without a controlling terminal. You can simulate that with a tool like this:
http://libslack.org/daemon/
By default, cron executes its jobs using whatever your system's idea of sh is. This could be the actual Bourne shell or dash, ash, ksh or bash (or another one) symlinked to sh (and as a result running in POSIX mode).
The best thing to do is make sure your scripts have what they need and to assume nothing is provided for them. Therefore, you should use full directory specifications and set environment variables such as $PATH yourself.
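For example, a cron-friendly script might start like this (a sketch; the paths are only illustrative):
#!/bin/sh
# Assume nothing from cron: set PATH and the working directory explicitly.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
cd /path/to/project || exit 1
/usr/bin/ruby script.rb >> /var/log/myscript.log 2>&1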
The accepted answer does give a way to run a script with the environment cron would use. As others pointed out, this is not the only criterion needed for debugging cron jobs.
Indeed, cron also uses a non-interactive terminal, without an attached input, etc.
If that helps, I have written a script that enables painlessly running a command/script as it would be run by cron. Invoke it with your command/script as the first argument and you're good.
This script is also hosted (and possibly updated) on Github.
#!/bin/bash
# Run as if it was called from cron, that is to say:
# * with a modified environment
# * with a specific shell, which may or may not be bash
# * without an attached input terminal
# * in a non-interactive shell
function usage(){
echo "$0 - Run a script or a command as it would be in a cron job, then display its output"
echo "Usage:"
echo " $0 [command | script]"
}
if [ "$1" == "-h" -o "$1" == "--help" ]; then
usage
exit 0
fi
if [ $(whoami) != "root" ]; then
echo "Only root is supported at the moment"
exit 1
fi
# This file should contain the cron environment.
cron_env="/root/cron-env"
if [ ! -f "$cron_env" ]; then
echo "Unable to find $cron_env"
echo "To generate it, run \"/usr/bin/env > /root/cron-env\" as a cron job"
exit 0
fi
# It will be a nightmare to expand "$@" inside a shell -c argument.
# Let's rather generate a string where we manually expand-and-quote the arguments
env_string="/usr/bin/env -i "
for envi in $(cat "$cron_env"); do
env_string="${env_string} $envi "
done
cmd_string=""
for arg in "$#"; do
cmd_string="${cmd_string} \"${arg}\" "
done
# Which shell should we use?
the_shell=$(grep -E "^SHELL=" /root/cron-env | sed 's/SHELL=//')
echo "Running with $the_shell the following command: $cmd_string"
# Let's route the output in a file
# and do not provide any input (so that the command is executed without an attached terminal)
so=$(mktemp "/tmp/fakecron.out.XXXX")
se=$(mktemp "/tmp/fakecron.err.XXXX")
"$the_shell" -c "$env_string $cmd_string" >"$so" 2>"$se" < /dev/null
echo -e "Done. Here is \033[1mstdout\033[0m:"
cat "$so"
echo -e "Done. Here is \033[1mstderr\033[0m:"
cat "$se"
rm "$so" "$se"
Another simple way I've found (but may be error prone, I'm still testing) is to source your user's profile files before your command.
Editing a /etc/cron.d/ script:
* * * * * user1 command-that-needs-env-vars
Would turn into:
* * * * * user1 source ~/.bash_profile; source ~/.bashrc; command-that-needs-env-vars
Dirty, but it got the job done for me. Is there a way to simulate a login? Just a command you could run? bash --login didn't work. It sounds like that would be the better way to go though.
EDIT: This seems to be a solid solution: http://www.epicserve.com/blog/2012/feb/7/my-notes-cron-directory-etccrond-ubuntu-1110/
* * * * * root su --session-command="command-that-needs-env-vars" user1 -l
Answer https://stackoverflow.com/a/2546509/5593430 shows how to obtain the cron environment and use it for your script. But be aware that the environment can differ depending on the crontab file you use. I created three different cron entries to save the environment via env > log. These are the results on an Amazon Linux 4.4.35-33.55.amzn1.x86_64.
1. Global /etc/crontab with root user
MAILTO=root
SHELL=/bin/bash
USER=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=1
HOME=/
LOGNAME=root
_=/bin/env
2. User crontab of root (crontab -e)
SHELL=/bin/sh
USER=root
PATH=/usr/bin:/bin
PWD=/root
LANG=en_US.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
_=/usr/bin/env
3. Script in /etc/cron.hourly/
MAILTO=root
SHELL=/bin/bash
USER=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin
_=/bin/env
PWD=/
LANG=en_US.UTF-8
SHLVL=3
HOME=/
LOGNAME=root
Most importantly, PATH, PWD and HOME differ. Make sure to set these in your cron scripts to rely on a stable environment.
In my case, cron was executing my script using sh, which failed to execute some bash syntax.
In my script I added the env variable SHELL:
#!/bin/bash
SHELL=/bin/bash
I don't believe that there is; the only way I know to test a cron job is to set it up to run a minute or two in the future and then wait.
