Laravel deployment to Apache with php-fpm restart

I'm quite new to Laravel and the concept of CI/CD, but I have invested the last 24 hours in getting something up and running. I'm using gitlab.com as the repo, and there I have configured the CI/CD functionality.
Deployments go to SRV1, which has a corresponding user configured with a certificate. SRV1 then clones the necessary files from the GitLab repo using Deployer; the GitLab repo also has the public key of the SRV1 user. This chain is working quite well.
The problem is that after deploying I need to restart php-fpm so that it reinitializes its symlinks and updates its absolute-path cache.
I saw various methods to overcome this by setting some CGI settings in php-fpm, but these didn't work for me since they all use nginx, while I'm using Apache.
Is there any other way to tell php-fpm under Apache to reinitialize its paths or reload after changes?
The method of adding the deployer user to the sudoers list and calling service php-fpm restart looks quite hacky to me...
Thanks
UPDATE1:
Actually I found this: https://github.com/lorisleiva/laravel-deployer/blob/master/docs/how-to-reload-fpm.md
It looks like Deployer has a technique to do this. But it requires the deployer user to have access to the php-fpm reload command, which looks a bit unsafe to me.
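For reference, that approach usually boils down to a single sudoers rule scoped to that one command (a sketch; the user name and service path are placeholders for your setup):
# /etc/sudoers.d/deployer - allow exactly one privileged command, nothing else
deployer ALL=(root) NOPASSWD: /usr/sbin/service php-fpm reload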
I didn't find any other solutions. There are some for nginx that tell it to always re-evaluate the real path; for Apache the equivalent should be the FollowSymLinks option, but it was not working.
For now I created a bash script that runs as root and checks the "current" symlink for changes every 10 seconds; if it changed, it reloads php-fpm. Not nice, quite ugly even, but it should work.
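A minimal sketch of such a watcher (the symlink path and service name are placeholders; adjust them to your deployment):
#!/usr/bin/env bash
# Reload php-fpm whenever the "current" symlink target changes.
LINK="/var/www/app/current"        # placeholder deploy path
last="$(readlink -f "$LINK")"
while true; do
    sleep 10
    now="$(readlink -f "$LINK")"
    if [ "$now" != "$last" ]; then
        systemctl reload php-fpm   # service name may differ, e.g. php7.4-fpm
        last="$now"
    fi
done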
Still open for other proposals.

I solved this issue on my server by adding a PHP file that clears APCu and OPcache:
<?php
// clear_apc_cache.php - only allow requests from localhost
if (in_array($_SERVER['REMOTE_ADDR'] ?? '', ['127.0.0.1', '::1']))
{
    if (function_exists('apcu_clear_cache')) {
        // APCu takes no arguments (unlike the old apc_clear_cache('user'/'opcode'))
        apcu_clear_cache();
    }
    if (function_exists('opcache_reset')) {
        opcache_reset();
    }
    echo "Cache cleared";
}
else
{
    die("You can't clear cache");
}
Then you have to call it with curl after you have updated your symlink:
/usr/bin/curl --silent https://domain.ext/clear_apc_cache.php
I use GitLab CI/CD and it works for me now.

Related

How to deploy a Laravel project to a shared server

I have a Laravel project that I want to deploy to a server. Normally we have index.php and .htaccess inside the public folder, but in my case I have moved these two files into the root. So I want to know: what changes are needed on the server?
How can I upload this to the server?
Solution 1
You need to get SSH access as a shared user from your hosting provider; then you can use git to clone your repository onto your server.
Solution 2
You can copy all of your project onto the server using FTP from cPanel or a similar control panel.
Solution 3
Use Amazon as your host, as it gives one year of free-tier access and also gives you SSH access. Then follow Solution 1.
Put the files back into the public folder. You can point your server's document root to your project's public folder.
Follow these steps:
Go to /etc/apache2/sites-enabled/
Open the .conf file inside this folder
Change the DocumentRoot to /var/www/html/project_name/public
Restart the Apache server using the following command:
sudo systemctl restart apache2
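The edit and restart as shell commands (a sketch; this assumes Ubuntu's default 000-default.conf and a project under /var/www/html/project_name, so adjust both to your setup):
# Point Apache's document root at Laravel's public/ folder, then restart.
sudo sed -i 's#DocumentRoot /var/www/html#DocumentRoot /var/www/html/project_name/public#' /etc/apache2/sites-enabled/000-default.conf
sudo systemctl restart apache2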

Laravel REST API URL 404 not found on AWS EC2 in Apache + MySQL environment - The requested URL was not found on this server

This question is about an AWS Laravel implementation on an Apache + MySQL AWS EC2 instance.
After copying the working Laravel folder from xampp/htdocs/my_project_name, the migrations that create tables in the MySQL database and the seeders are working.
However, I could not connect to my APIs using Postman (404 not found).
I followed these solution links:
laravel the requested url was not found on this server
https://laracasts.com/discuss/channels/general-discussion/laravel-5-the-requested-url-was-not-found-on-this-server
I managed to modify httpd.conf. However, I could not find the .htaccess file
(
where can I find .htaccess? Sorry for the stupid question, but I can't find it :) – MilanNz Mar 11 '15 at 12:30
@MilanNz The .htaccess can be found in the public directory of your application. However, the code from this answer goes inside a vhost file. The location of that depends on your server (e.g. for apache2 on unix it's usually at /etc/apache2/sites-available).
)
Also, I was not able to restart Apache using "service apache2 restart".
So I "sudo reboot"ed the EC2 instance and reconnected using Postman, but the API URLs were still not found.
There is a possibility that my URL is wrong. So I attach it here:
The URL used is http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/my_project_name/public/api/resultCRUD/list
The working xampp URL is http://localhost/my_project_name/public/api/resultCRUD/list
The Laravel project folder is located at /var/www/html/my_project_name on AWS EC2.
http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/phpinfo.php and
http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/phpMyAdmin/ are working.
Any help is greatly appreciated. Thanks!
It's finally working.
The reason I was stuck is that most of the answers are for Ubuntu, while I am using RedHat.
For a RedHat EC2 instance, you first need to change the contents of /etc/httpd/conf/httpd.conf following https://pinecode.io/article/setting-up-laravel-56-on-aws-linux.
In this step I actually changed all occurrences of "AllowOverride None" to "AllowOverride All", instead of only line 151 of httpd.conf.
Then restart httpd using sudo service httpd restart,
following https://gistpages.com/posts/enable_mod_rewrite_in_apache2_on_red_hat_linux
Then it all works fine.
I had not restarted my Apache service after saving changes to httpd.conf when I asked this question.
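A condensed sketch of the fix described above (RedHat/Amazon Linux paths; review the sed change before running it blindly):
# Allow .htaccess overrides so Laravel's rewrite rules take effect, then restart.
sudo sed -i 's/AllowOverride None/AllowOverride All/g' /etc/httpd/conf/httpd.conf
sudo service httpd restart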

Using the laradock Docker configuration for development

Hello there, we are currently developing a Laravel application. I want all my team members to work locally, so we decided to use Docker for our local development environment. I did a little research and there is a project called laradock. After installing it I am supposed to go to http://localhost and the project should run, but instead I get an error page.
I am using apache2 and mysql
tl;dr
Go to ./laradock/.env and search for APACHE_DOCUMENT_ROOT then edit that line to this:
APACHE_DOCUMENT_ROOT=/var/www/public
Things to do after the change
For this change to take effect, you have to:
Rebuild the container: docker-compose build apache2
Restart the containers: docker-compose up
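The same steps as shell commands (a sketch, run from the laradock folder; the service names mirror the apache2 + mysql setup from the question):
# Set the document root, rebuild the image, then bring the containers back up.
sed -i 's#^APACHE_DOCUMENT_ROOT=.*#APACHE_DOCUMENT_ROOT=/var/www/public#' .env
docker-compose build apache2
docker-compose up -d apache2 mysql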
Explanation
As mentioned by simonvomeyser on GitHub, this is a recent addition which has the same effect as rodion.arr's solution, but this way you can leave the original config files untouched and use the .env file to store all your project-related configuration. Obviously, since this is a Docker config change, you have to rebuild and restart your container, as rodion-arr and 9bits pointed out in the same thread.
Check your Apache configuration (in my case the [laradock_folder]/apache2/sites/default.apache.conf file).
You should have DocumentRoot /var/www/public/.
I suppose you have /var/www/ instead.
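A quick way to check (run from the laradock folder; the path is the one from the answer above):
grep -n "DocumentRoot" apache2/sites/default.apache.conf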

Using an NGINX server to deploy a Meteor app from an Amazon Linux AMI 2013.09.2 instance: Module Error

I am attempting to deploy my first web application (a version of Telescope from the MeteorJS framework) via Heroku to a custom subdomain from an Amazon Linux AMI 2013.09.2 instance. I am following along with this tutorial - http://satishgandham.com/2013/12/a-complete-guide-to-install-production-ready-telescope-on-your-own-server/ - but once I attempt to run Telescope using PORT=3000 MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js, I receive this error message: Error: Cannot find module '/home/ec2-user/bundle/programs/server/node_modules/fibers/client/main.js'
To solve this I tried cp and mv on the file main.js, which is originally located in the ~/Telescope/client directory, moving it over to the /home/ec2-user/bundle/programs/server directory and even /home/ec2-user/bundle/programs/server/node_modules/fibers, but I cannot seem to separate main.js from the /client directory. I am not sure if that is the issue or if there is some other underlying problem, but at this point I want to find a workaround to using a proxy server. I thought that moving the main.js file out of the /client directory was sufficient, but apparently not.
Or, if anyone could direct me on how this - https://github.com/aldeed/deploymeteor/ - could be a potential workaround to using an NGINX server proxy, your help would be much appreciated.
You are getting the error because you are not running the command from your home folder.
You were at bundle/programs/server/node_modules/fibers.
Either use an absolute path for client/main.js, or cd to ~ first:
MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js
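Put together as one runnable sequence (the port, URL, and DB name are copied from the question):
cd ~
PORT=3000 MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js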
PS: It will be more helpful for others if you ask the question on the post itself instead of here.

hg serve as Windows service

I'd like to use Mercurial on a Windows server. Since I want to pull and push via HTTP, hg serve seems the easiest solution. It works fine, but I have to restart it after each reboot, so I need it as a Windows service. Installing it manually with sc create ... didn't work; it created a service that throws an error when I attempt to start it. I found some references to this problem
https://bitbucket.org/tortoisehg/stable/issue/1245/configure-hg-serve-to-run-as-a-windows-service-from
https://bitbucket.org/andrearicossa/hgservice
but they are poorly documented, if at all. (Of course, I could install a web server and use hgweb, but that seems even more complicated.) Do you have any experience with how to easily set up hg serve ... <many args> as a Windows service?
UPDATE:
Thanks for the different approaches. We stayed with hg serve; the Windows guy at our company managed to install it as a not-quite-proper service.
Using a web server such as Apache/lighttpd/IIS gives you a lot of features, such as authentication or HTTPS support, but 'hg serve' is a simple and fast solution. Furthermore, 'hg serve' can serve multiple repositories. However, hg serve cannot itself be run as a Windows service because it cannot respond to the Windows service control commands, so using HgService is a good way to make 'hg serve' a real Windows service.
Here is an example of my configuration. I followed these steps:
Install TortoiseHG
Install HgService
Create "C:\Repositories" folder and put needed repos into it.
Create "C:\Repositories\hgweb.config" with following contents:
[paths]
/ = C:\Repositories\*
[web]
style = monoblue
Modify HgService.exe.config in C:\Program Files\Mercurial\HgService
<add key="CommandLine" value="hg.exe"/>
<add key="CommandLineArguments" value="serve --prefix=/hg --address 0.0.0.0 --port 80 --web-conf c:\Repositories\hgweb.config -A access.log -E error.log" />
Start the service
Hope this sequence of actions will be helpful to you too.
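Once the service is running, a quick smoke test from any client machine (the host and repo names are placeholders; the /hg prefix and port 80 come from the config above):
hg clone http://yourserver/hg/yourrepo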
Or you could use the SCM-Manager
You should check out Jeremy Skinner's blog post on this subject. He explains how you can host Mercurial repositories on IIS7 and use some nice URL routing.
I did it on my machine and it works like a charm. It takes some configuration, but it's worth it.
One error I noticed in his post: he writes about a hgwebdir.cgi, but I couldn't find that one. I did find a hgweb.cgi, so I did the copy-pasting with that file.
An Apache-based alternative: HgServe - Mercurial Repository Server for Windows on Apache
hg serve works fine with NSSM!
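For example (a sketch; the service name, port, and paths are placeholders, and NSSM's install syntax is nssm install <servicename> <program> [<arguments>]):
nssm install HgServe "C:\Program Files\Mercurial\hg.exe" serve --port 8000 --web-conf C:\Repositories\hgweb.config
nssm start HgServe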
