I am having an issue deploying a Laravel application to an Ubuntu server with Capistrano.
My deployment directory is /var/www/project_stage. When I deploy my project to that directory, everything works just fine. My project becomes live, every single line of code works just as it should.
But when I make a change and deploy a new version of the same project, somehow (I'm guessing) my files are being cached: the server does not respond with the newest release, but still with the old version that has already been overwritten.
When I deploy the project to a different folder (e.g. /var/www/project_stage2 instead of /var/www/project_stage) and change my Nginx config to serve from that folder, it works as it should again, but not on a second deploy to the same directory. So I can say that I can deploy to a different directory every time, but I cannot deploy to the same directory twice: it always responds as the first deploy.
Here's what I've tried:
I checked whether Capistrano's current symlink points to the correct folder, which it does (see the quick check after this list).
I checked whether the changes I made are visible in the new deploy, which they are. The files are definitely changed on the new deploy.
I checked whether Nginx is looking at the correct release directory, which it is.
I tried running the php artisan cache:clear, route:clear, view:clear, and config:cache commands, and I ran composer dump-autoload too. Nothing worked.
I changed Nginx's sendfile parameter to off and restarted; no result.
I read a similar issue on this question, but it didn't help in my case.
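For the first two items, a quick check from the shell looks like this (current is the symlink Capistrano maintains; the path is the one from this question):

readlink -f /var/www/project_stage/current   # which release directory is actually live
ls -lt /var/www/project_stage/current/ | head   # are the newest files really in it?

Both of these look correct for me, which is why I suspect the staleness happens after the filesystem, i.e. inside PHP itself.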
Here is my deploy.rb:
# deploy_path inherited from staging.rb
lock "~> 3.10.1"

set :application, "project_stage"
set :repo_url, "MY REPO HERE"
set :keep_releases, 10
set :laravel_dotenv_file, "./.env.staging"

namespace :deploy do
  before :updated, :easy do
    on roles(:all) do |host|
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/logs"
      execute :chmod, "-R 777 #{deploy_path}/shared/storage/framework"
    end
  end

  after :finished, :hard do
    on roles(:all) do |host|
    end
  end

  desc "Build"
  after :updated, :build do
    on roles(:web) do
      within release_path do
        execute :php, "artisan clear-compiled"
        execute :php, "artisan cache:clear"
        execute :php, "artisan view:clear"
        execute :php, "artisan route:cache"
        execute :php, "artisan config:cache"
      end
    end
  end
end # end deploy namespace
I am using PHP 7.0 (FPM with a Unix socket), Nginx, Laravel 5, and Capistrano 3 (with the capistrano/laravel gem) on Ubuntu Server 16.04.
The problem you are describing could occur if you are using OPcache with opcache.validate_timestamps set to zero. With validate_timestamps set to zero, OPcache never checks for a newer version of the file. This improves performance slightly, but it means you will need to manually flush the cache.
There are two things you can do to resolve the issue:
Set opcache.validate_timestamps to 1 in your php.ini. This will result in a small performance decrease.
...or flush the cache during your deployment, after the new files have been deployed, by calling opcache_reset() in a PHP script.
Note that PHP-FPM and the CLI keep separate OPcache instances, so simply running opcache_reset() with the php CLI binary will not clear the cache your web requests use. Because you are using php-fpm, you can still trigger the flush from the command line by going through the FPM socket (for example with a tool such as cachetool), or by requesting a small PHP script over HTTP. If you were using Apache with mod_php, you would likewise need to flush the cache in a script invoked by Apache (through an HTTP request) rather than from the CLI. The cache must be flushed in the context that your application runs in.
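For example, a deploy-time flush step could look like this (a sketch, not part of the original setup: cachetool is the third-party gordalina/cachetool utility, the socket path must match your FPM pool, and the URL is a placeholder for a small whitelisted script that calls opcache_reset(), like the one shown in a later answer below):

# reset OPcache through the php-fpm socket, not the separate CLI cache
php cachetool.phar opcache:reset --fcgi=/run/php/php7.0-fpm.sock

# ...or request a small whitelisted PHP script that calls opcache_reset()
curl --silent https://example.com/opcache-clear.php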
Related
Currently I have a Ruby and Sinatra project in which I am trying to automate a task. I have followed all the installation steps for the Whenever gem and written a function in my schedule.rb file that looks like this:
set :output, "./cron_log.log"
every 1.minutes do
command "ruby ./scripts/my_script.rb"
end
My project will eventually be deployed to a remote server, but I do not have access to that at the moment. Instead, I am testing it all by running it on my local machine using the command bundle exec shotgun -o 0.0.0.0 -p 9393. When I run it, none of the functions in my schedule.rb file are run.

I ran a test by putting require './scripts/my_script.rb' at the top of the schedule.rb file. Whenever I run bundle exec whenever, that outside function executes, but not the function inside the every block. I figured maybe it only works on a live server, so I ran the bundle exec shotgun -o 0.0.0.0 -p 9393 command to start my server on localhost, but nothing in the schedule.rb file was called, including the outside function.

I tried putting the command system("bundle exec whenever") in my authentication file, which gets run every time a user loads the home page of my website; the outside function in schedule.rb does get called, but the every function does not. Even if it did work like this, I wouldn't want the file to be called every single time a user accesses the home page.
I also tried putting require './config/schedule' in the authentication file and that just completely breaks the website and gives this error message:
Boot Error
Something went wrong while loading config.ru
NoMethodError: undefined method `every' for main:Object
Here is part of the output when running the crontab -l command:
# Begin Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
* * * * * /bin/bash -l -c 'ruby ./scripts/my_script.rb >> ./cron_log.log 2>&1'
# End Whenever generated tasks for: /file_path_redacted/config/schedule.rb at: 2022-10-21 18:50:21 -0500
So, my question is this: How and when is the schedule.rb file supposed to get called? And is it important that I deploy the project to the remote server for the cron job to work? Again, it is not possible for me to deploy it, as it must be fully working before being deployed. If it is possible to run on localhost, what am I doing wrong?
I have seen this answer in many posts, but none of them have helped me at all. I followed the usual steps to set up the Laravel project, like this:
I cloned from my repository.
I ran composer update.
I added 777 permissions to storage and bootstrap folders.
I have a .env file.
I verified the .htaccess and it's OK.
It works on localhost, but when I try to replicate it on Hostinger it does not work; it displays the 500 server error. So I wonder what the problem is.
By the way, I checked the logs and they were empty. I also set the Laravel project's debug option to true.
The website URL is xellin.com.
The debug: (screenshot in the original post)
The logs folder: (screenshot in the original post)
Thanks.
I think this is a good opportunity to point out how PHP, Laravel, and the underlying server interact with each other.
First:
The HTTP server inspects the document root and .htaccess to get its instructions.
If the file is a .php file (as with Laravel), it calls the PHP handler.
The PHP handler could be an FPM version or a FastCGI version.
If an error occurs while parsing the .htaccess, or in the initial interaction between the HTTP server and PHP, then Laravel never actually runs. Everything ends up in a PHP or server error log.
To find out what's wrong, you need to inspect what PHP and the HTTP server said about the error in their respective logs.
In short: at this point it is not a Laravel error, but a server/PHP one.
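For example, you can watch both logs while reproducing the error (the paths below are common defaults and will differ per host and PHP version):

tail -n 50 /var/log/apache2/error.log   # the HTTP server's error log
tail -n 50 /var/log/php7.0-fpm.log      # PHP-FPM's own log, if FPM is in use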
Second:
If Apache/PHP starts up well, then PHP executes the Laravel application lifecycle. If Laravel encounters a problem, you will see the usual error output from Laravel's error handler.
This is a must-know for working with web apps in general, because developers often fail to notice whether the problem lies with Laravel or with PHP / the server itself.
As a side note, that's why it is important to know how to choose a proper hosting service for Laravel.
Thanks for reading.
You can try clearing the cache:
php artisan optimize
Alternatively, you can manually delete the cached files: inside the bootstrap folder there is a cache folder; delete all the files in it except the .gitignore file, and your issue should be fixed.
If you see this error again on the live server, you can update your Composer dependencies and then run:
php artisan optimize
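For reference, the manual route as commands, run from the project root (standard Laravel 5 layout assumed):

rm -f bootstrap/cache/*.php   # removes the cached config/route/service files, keeps .gitignore
php artisan optimize          # then re-run optimize as suggested above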
First of all: if you give any of your folders 777 permissions, you are allowing anyone to read, write, and execute any file in that directory. This means you have given anyone in the world (any hacker or malicious person) permission to upload any file, virus, or other file, and then execute that file. So please be careful: if you are setting your folder permissions to 777, you have opened your server to anyone who can find that directory. Please read the full explanation here.
Second, here are the detailed steps I use to deploy my projects to the server:
Run npm run production, then update your GitHub repo.
Clone the project from GitHub to the server, into a folder outside public_html.
Run cd <cloned folder name>.
Run composer install.
Run npm install.
Copy the .env file into the cloned folder and configure it (be sure the name is .env, not env).
Copy all the content of cloned_project_folder_name/public into the public_html folder (see the command sketch after this list).
In index.php inside the public_html folder, edit the paths as below:
$app = require_once __DIR__.'/../cloned_project_folder_name/bootstrap/app.php';
require __DIR__.'/../cloned_project_folder_name/vendor/autoload.php';
Set up your .htaccess properly.
Change the permissions to 755 for index.php and all files in the public_html folder.
Run composer install --optimize-autoloader --no-dev.
Run php artisan config:cache.
Run php artisan route:cache.
I think that covers it all; hope it helps.
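As a sketch, the "copy public content" and permissions steps as shell commands, keeping the placeholder folder name from the steps above (adjust paths to your account layout):

cp -r ~/cloned_project_folder_name/public/. ~/public_html/   # copy the public assets into public_html
chmod -R 755 ~/public_html                                   # 755, as recommended above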
I'm quite new to Laravel and the concept of CI/CD, but I have invested the last 24 hours in getting something up and running. I'm using gitlab.com as the repo, where I have configured the CI/CD functionality.
Deployments are made to SRV1, which has a corresponding user configured with a certificate. SRV1 then clones the necessary files from the GitLab repo using Deployer. The GitLab repo also holds the public key of the SRV1 user. This chain is working quite well.
The problem is that after deploying I need to restart php-fpm so that it reinitializes its symlinks and updates its realpath cache.
I have seen various methods that overcome this by setting some CGI parameters in php-fpm, but they didn't work for me, since they all assume Nginx while I'm using Apache.
Is there any other way to tell php-fpm under Apache to reinitialize its paths, or to reload after changes?
The method of adding the deployer user to the sudoers list and calling a php-fpm service restart looks quite hacky to me...
Thanks
UPDATE1:
Actually I found this: https://github.com/lorisleiva/laravel-deployer/blob/master/docs/how-to-reload-fpm.md
It looks like Deployer has a technique for this, but it requires the deployer user to have access to the php-fpm reload command. That looks a bit unsafe to me.
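If I do go that route, the exposure can at least be narrowed with a sudoers entry that whitelists the single reload command instead of granting general root. A sketch (the username, PHP version, and unit name are placeholders, not from my actual setup):

# /etc/sudoers.d/deployer — lets the deploy user reload php-fpm and nothing else
deployer ALL=(root) NOPASSWD: /bin/systemctl reload php7.2-fpm

The deploy user could then run only that exact command with sudo, without a password prompt.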
I didn't find any other solutions. There are some for Nginx that tell it to always re-evaluate the real path of the document root; for Apache the equivalent should apparently be FollowSymLinks, but it was not working.
For now, I have created a bash script running under root that checks the "current" symlink for changes every 10 seconds and reloads php-fpm whenever it has changed (see the sketch below). Not nice, and of course quite ugly, but it should work.
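Roughly, the watcher looks like this (a sketch along the lines described above; the symlink path and service name are placeholders):

#!/usr/bin/env bash
# poll the "current" symlink every 10 seconds; reload php-fpm when it changes
CURRENT=/var/www/project/current
LAST=$(readlink "$CURRENT")
while true; do
  sleep 10
  NOW=$(readlink "$CURRENT")
  if [ "$NOW" != "$LAST" ]; then
    systemctl reload php7.2-fpm   # adjust to your PHP version / init system
    LAST="$NOW"
  fi
done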
Still open for other proposals.
I solved this issue on my server by adding a PHP file that clears APCu and OPcache:
<?php
// only allow requests from the server itself
if (in_array(@$_SERVER['REMOTE_ADDR'], ['127.0.0.1', '::1']))
{
    // apcu_clear_cache() takes no arguments and clears the whole APCu store;
    // the old apc_clear_cache('user'/'opcode') variants belong to the legacy APC extension
    apcu_clear_cache();
    opcache_reset();
    echo "Cache cleared";
}
else
{
    die("You can't clear cache");
}
Then you have to call it with curl after you have updated your symlink:
/usr/bin/curl --silent https://domain.ext/clear_apc_cache.php
I use GitLab CI/CD, and this now works for me.
I built a project using Laravel 5 on my dev machine and now I'd like to deploy it.
One solution that came to my mind is to upload everything over FTP, but I guess there is a better way.
I uploaded the composer.json, but I receive tons of errors.
I have SSH/root access, but using Git is not an option.
Make sure you can use the composer binary on your server, and you are set.
Upload every file except the vendor folder (you may use an FTPS manager that reads the .gitignore file and does not upload ignored files).
Set the permissions on the ./storage folder (browse through this Server Fault thread, and see the sketch after this list).
Make sure your web server's document root is ./public.
Create the .env file (which is never going to change, until you want it to) and do not overwrite it with your "local" .env file.
$ composer install (installs everything from composer.lock)
$ composer update (updates from the repositories again; test locally before updating on production)
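A sketch of the permissions step (the www-data owner is an assumption — use whatever user your web server runs as; note that Laravel also needs bootstrap/cache to be writable, which goes slightly beyond the ./storage item above):

# give the web server user write access to the directories Laravel writes to
chown -R www-data:www-data storage bootstrap/cache
chmod -R 775 storage bootstrap/cache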
I am working on automating the deployment of my Jekyll site. This is what the finished product will be:
I push a change from my local machine to my remote git repo.
A hook pulls the files into directory ABC.
guard notices the changes.
guard deletes the files in directory DEF.
guard uses jekyll to build the site to directory DEF.
I have everything set up except for #4. Does the Guardfile allow regular commands like rm? If not, could I use guard-rake to call a Rakefile that deletes the old content and then runs the Jekyll build?
Thanks.