I currently have a Ruby/Sinatra project in which I am trying to automate a task. I have followed all the installation steps for the Whenever gem and defined a job in my schedule.rb file that looks like this:
set :output, "./cron_log.log"

every 1.minutes do
  command "ruby ./scripts/my_script.rb"
end
My project will eventually be deployed to a remote server, but I do not have access to that at the moment. Instead, I am testing everything on my local machine using the command bundle exec shotgun -o 0.0.0.0 -p 9393. When I run it, none of the jobs in my schedule.rb file are run.

As a test, I put require './scripts/my_script.rb' at the top of the schedule.rb file. When I run bundle exec whenever, that code outside the every block executes, but the code inside the block does not. I figured maybe it only works on a live server, so I started the server on localhost with the same shotgun command, but nothing in schedule.rb was called, including the code outside the block. I also tried putting system("bundle exec whenever") in my authentication file, which runs every time a user loads the home page of my website; the code outside the every block does get called there, but the block itself still does not. Even if that did work, I don't want the file to be invoked every single time a user accesses the home page.
I also tried putting require './config/schedule' in the authentication file, and that completely breaks the website with this error message:
Boot Error
Something went wrong while loading config.ru
NoMethodError: undefined method `every' for main:Object
Here is part of the output when running the crontab -l command:
# Begin Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
* * * * * /bin/bash -l -c 'ruby ./scripts/my_script.rb >> ./cron_log.log 2>&1'
# End Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
So, my question is this: how and when is the schedule.rb file supposed to get called? And is it important that I deploy the project to the remote server for the cron job to work? Again, deploying is not possible right now, as the project must be fully working before it is deployed. If it is possible to run this on localhost, what am I doing wrong?
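For illustration, here is a sketch of how the same job could be written with absolute paths: cron runs jobs from its own working directory (usually the user's home), so the relative ./ paths in the crontab entry above would resolve there rather than in the project. /path/to/app is a placeholder for the real project root, and the commands in the comment are the ones that install and remove the generated crontab entries.

set :output, "/path/to/app/cron_log.log"

every 1.minutes do
  # run from the project root so relative paths inside the script still work
  command "cd /path/to/app && ruby scripts/my_script.rb"
end

# Installing / removing the generated entries is a separate, one-off step:
#   bundle exec whenever --update-crontab
#   bundle exec whenever --clear-crontab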
Related
I am having an issue while trying to deploy a Laravel application to an Ubuntu server with Capistrano.
My deployment directory is /var/www/project_stage. When I deploy my project to that directory, everything works just fine: the project goes live and every line of code works as it should.
But when I make a change and deploy a new version of the same project, somehow (I'm guessing) my files are getting cached, and the app does not respond with the newest release; it still serves the old version that has already been overwritten.
When I deploy the project to a different folder (e.g. /var/www/project_stage2 instead of /var/www/project_stage) and change my Nginx config to serve from that folder, it works again, but not on a second deploy to the same directory. So I can deploy to a different directory every time, but I cannot deploy to the same directory twice; it always responds as the first deploy.
Here's what I've tried:
I checked that Capistrano's current symlink points to the correct release folder; it does.
I checked that the changes I made are present in the new deploy; they are. The files on disk are definitely updated.
I checked that Nginx is looking at the correct release directory; it is.
I ran the php artisan cache:clear, route:clear, view:clear and config:cache commands, and composer dump-autoload as well. Nothing worked.
I changed Nginx's sendfile parameter to off and restarted; no result.
I read about a similar issue on this question, but it didn't help in my case.
Here is my deploy.rb:
#deploy_path inherited from staging.rb
lock "~> 3.10.1"
set :application, "project_stage"
set :repo_url, "MY REPO HERE"
set :keep_releases, 10
set :laravel_dotenv_file, "./.env.staging"
namespace :deploy do
  before :updated, :easy do
    on roles(:all) do |host|
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/logs"
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/framework"
    end
  end

  after :finished, :hard do
    on roles(:all) do |host|
    end
  end

  desc "Build"
  after :updated, :build do
    on roles(:web) do
      within release_path do
        execute :php, "artisan clear-compiled"
        execute :php, "artisan cache:clear"
        execute :php, "artisan view:clear"
        execute :php, "artisan route:cache"
        execute :php, "artisan config:cache"
      end
    end
  end
end # end deploy namespace
I am using PHP 7.0 (FPM with a Unix socket), Nginx, Laravel 5, Capistrano 3 (with the capistrano/laravel gem), and Ubuntu Server 16.04.
The problem you are describing could occur if you are using OPcache with opcache.validate_timestamps set to zero. With validate_timestamps set to zero, OPcache never checks for a newer version of the file. This improves performance slightly, but it means you will need to manually flush the cache.
There are two things you can do to resolve the issue:
Set opcache.validate_timestamps to 1 in your php.ini. This will result in a small performance decrease.
...or flush the cache during your deployment, after the new files have been deployed, by calling opcache_reset() in a PHP script.
Note that OPcache's shared memory belongs to the PHP process that created it, so the cache must be flushed in the context that your application runs in. With php-fpm, calling opcache_reset() from the plain CLI only clears the CLI's own cache, not FPM's; the usual options are a small PHP script invoked over HTTP, or a tool that talks directly to the FPM socket (one way to script this during the deploy is sketched below). The same applies to Apache with mod_php, where the flush must happen in a script invoked by Apache (through an HTTP request) rather than from the CLI.
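If you go the opcache_reset() route, one way to wire it into the deploy.rb above is a task that runs once the new release is live. This is only a sketch: it assumes cachetool.phar has been placed somewhere the deploy user can run it and that php-fpm listens on /var/run/php/php7.0-fpm.sock; both are placeholders to adapt to your server.

namespace :deploy do
  desc "Flush OPcache so php-fpm serves the new release"
  after :finished, :flush_opcache do
    on roles(:web) do
      within release_path do
        # cachetool talks to the php-fpm socket, so the reset happens inside the
        # same OPcache that serves the application (adjust the socket path).
        execute :php, "cachetool.phar", "opcache:reset", "--fcgi=/var/run/php/php7.0-fpm.sock"
      end
    end
  end
end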
I am using God for the first time to monitor my resque and resque-scheduler processes. I followed the tutorial on God's home page. According to that, if there is already a watch added to God with:
sudo god -c /path/to/config.god
then after editing the watch it can be added to God again using the same command. But it does not let me add it and reports that the socket is already in use; I have to manually kill the process and add the watch again. Am I missing something?
I need to add the watch again after every deployment, that is why I am trying to do this.
The page you link to does not actually support your assertion that you reload watches by using the same command that starts god, to wit:
sudo god -c /path/to/config.god
Instead it says to use:
sudo god load path/to/config.god
Specifically, the extracted parts of that page are:
STARTING AND CONTROLLING GOD
To start the god monitoring process as a daemon simply run the god executable passing in the path to the config file (you need to sudo if you're using events on Linux or want to use the setuid/setgid functionality):
$ sudo god -c /path/to/config.god
: : : : :
DYNAMICALLY LOADING CONFIG FILES INTO AN ALREADY RUNNING GOD
God allows you to load or reload configurations into an already running instance. There are a few things to consider when doing this:
Existing Watches with the same name as the incoming Watches will be overridden by the new config.
All paths must be either absolute or relative to the path from which god was started.
To load a config into a running god, issue the following command:
$ sudo god load path/to/config.god
If you're relying on the text:
Ctrl-C out of the foregrounded god instance. Notice that your current simple server will continue to run. Start god again with the same command as before.
then that's only for a foregrounded instance of god, one run with -D. If you Ctrl-C that, god will stop (but the servers it started will continue). If your god instance is running in the background (no -D), you need to use kill to stop it in the same manner.
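Since the watch has to be re-added after every deployment, one option is to hook god load into the deploy itself. This is only a sketch using a Capistrano 3 style task; the config path, file name and role are placeholders, and it assumes the god daemon is already running on the server:

namespace :god do
  desc "Reload the god config into the already running god daemon"
  task :reload do
    on roles(:app) do
      # `god load` merges the watches into the running instance instead of
      # starting a second daemon (which is what causes the socket-in-use error).
      execute :sudo, :god, :load, "#{shared_path}/config/resque.god"
    end
  end
end

after "deploy:published", "god:reload"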
I want to run a rake task (migrate) contained in the Rakefile of my Sinatra app. I am using Mina to deploy. rake migrate works great if I run it on the server or in my development environment, but I cannot get Mina to execute the task.
My current deploy task looks like this in config/deploy.rb:
task :deploy => :environment do
  deploy do
    # Put things that will set up an empty directory into a fully set-up
    # instance of your project.
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'

    to :launch do
      queue "sudo /opt/nginx/sbin/nginx -s reload"
    end
  end
end
I tried both queue "rake migrate" and queue "#{rake} migrate" within the deploy block and within the launch block but it always complains bash: command not found
In Mina, using ssh to execute rake directly is not quite a smart move. This is better:
mina 'rake[rake_task:task_whatever_you_write]' on=environment
Mina uses ssh to run remote commands. That means the commands run in a different environment than when you log in interactively, which causes problems with rvm and rbenv because they are not initialised properly. Luckily, Mina has rvm support; you just have to set it up:
require 'mina/rvm'

task :environment do
  invoke :'rvm:use[ruby-1.9.3-p125@gemset_name]'
end
task :deploy => :environment do
  ...
end
You can do a similar thing for rbenv (documentation)
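The rbenv variant is analogous; a minimal sketch assuming the mina/rbenv plugin that ships with Mina:

require 'mina/rbenv'

task :environment do
  # Initialise rbenv in the non-interactive ssh session so the expected Ruby
  # (and its gems, including rake) are on the PATH.
  invoke :'rbenv:load'
end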
I'm just starting out with Ruby and the Sinatra framework. I've got a setup going now with Heroku and I'm totally amazed how well it works. There is just one thing I can't figure out: how do I debug stuff? It might sound weird, but I have a variable that I'd like to print out and inspect, preferably in the terminal or something like that. How do I do this in Ruby with Foreman running? When I write print or puts, nothing shows up in the Foreman logging...
Thanks!
If you're using Foreman, try adding a log: process to your Procfile. For Rails apps, my Procfile looks like so:
web: bundle exec rails server thin -p $PORT -e $RACK_ENV
log: tail -f -n 0 log/development.log
You'll want to configure Sinatra to log to a file, in my example log/development.log.
Locally, Foreman will automatically spin up a log process and spit the logs out to the terminal, similar to what you see on Heroku. On Heroku, no log process will be run unless you manually scale it (which you don't want anyway).
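A sketch of one way to set that up inside the Sinatra app itself; the file name mirrors the example above (the log/ directory must already exist), and reopening $stdout is optional but makes bare puts calls land in the same file:

require 'sinatra'

configure do
  log_file = File.new("log/development.log", "a+")
  log_file.sync = true                  # flush writes immediately, no buffering
  use Rack::CommonLogger, log_file      # request log lines go to the file
  $stdout.reopen(log_file)              # optional: puts/print output goes there too
  $stderr.reopen(log_file)
end

get '/' do
  puts "debug: reached the root route"  # now visible through the log: process
  "hello"
end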
I have a Ruby on Rails app (Ruby 1.9.2, Rails 3.2) running on Heroku with Redis/Resque that requires a rake task to be enqueued at regular intervals. Right now I am running heroku run rake update_listings from my local machine once or twice a day, and I would like to automate this. I've tried the whenever gem, but the task would not start up in the background. Heroku Scheduler seems like the appropriate solution, but I am confused by the scheduler.rb file. I have:
desc "This task is called by the Heroku scheduler add-on"
task :hourly_feed => :environment do
Rake::Task[update_listings].execute
end
When I ran the :hourly_feed task from the Heroku Scheduler console and checked heroku logs, I saw several web dynos get spun up by hirefireapp, but the update_listings rake task was never invoked.
Update: I gave up on resque_scheduler. I am too green to make this work, so I am trying crontab and a script file instead. Here is my update.sh script file:
Rake::Task["update_listings"].execute
I set up cron using crontab -e and have it execute the script every 5 minutes, but I get an error in the mail logs:
Projects/livebytransit/update.sh: line 1: Rake::Task[update_listings].execute: command not found
It appears it is finding my update.sh script file and reading it, but it is not executing the code. I noticed the log entry dropped the quotes, so I also tried using single quotes in the shell script file; no change. I also tried changing update.sh to this:
heroku run rake update_listings
and the error came back: heroku: command not found
Personally, I used resque_scheduler, which will add jobs to the resque / redis queue using cron.
resque_schedule.yml
count_delayed_jobs_job:
  cron: "0 */1 * * *"
  class: Support::CountDelayedJobsResque
  queue: CDJ
  args:
  description: "count_delayed_jobs_job, every 1hr"
alternatively, you could just chuck bundle exec rake update_listings (which runs Rake::Task["update_listings"]) in a shell script and use crontab to trigger the job.
It turns out the Heroku Scheduler works perfectly... I simply forgot the quotes around "update_listings" in Rake::Task["update_listings"].
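For reference, the :hourly_feed task from above with the missing quotes added:

desc "This task is called by the Heroku scheduler add-on"
task :hourly_feed => :environment do
  Rake::Task["update_listings"].execute
end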