Laravel - Generic Apache 500 error with Envoyer directory structure

I am trying to deploy my Laravel 5 site to my VPS using Envoyer. I changed the document root in the site's Apache settings to /current/public (settings below), but when I do this I receive a generic Apache 500 error. If I use the old public directory, everything loads properly.
I also tried chmod -R 777 storage, with no luck. There are no entries in the Laravel log, and everything deploys without errors.
I did notice that if I create a plain HTML document and deploy it via Envoyer, I am able to access it directly with the /current/public document root; anything related to Laravel (and only when using current/public) results in the 500.
Ideas? Would a symlink be a possible solution? Oddly, my Forge configuration on my other Envoyer site has the document root set to public, yet there is no symlink to current/public that I can see. It may be set to current/public and just not be displaying that for some reason.
customlog:
  -
    format: combined
    target: /usr/local/apache/domlogs/mydomain.org
  -
    format: "\"%{%s}t %I .\\n%{%s}t %O .\""
    target: /usr/local/apache/domlogs/mydomain.org-bytes_log
documentroot: /home/eyf/current/public
group: eyf
hascgi: 1
homedir: /home/eyf
ifmoduleconcurrentphpc: {}
ifmodulemodsuphpc:
  group: eyf
ip: MY.IP.ADDR
owner: root
phpopenbasedirprotect: 1
port: 80
scriptalias:
  -
    path: /home/eyf/public/cgi-bin
    url: /cgi-bin/
  -
    path: /home/eyf/public/cgi-bin/
    url: /cgi-bin/
serveradmin: webmaster@mydomain.org
serveralias: www.mydomain.org
servername: mydomain.org
usecanonicalname: 'Off'
user: eyf
userdirprotect: ''

Okay, so I encountered two separate problems here.
The first problem was that I was deploying code as root while the site belongs to a cPanel user (eyf in this case). Because the files and directories were deployed as root, the resulting ownership mismatch caused the generic 500 error page.
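Something along these lines puts ownership back the way the cPanel account expects it (run as root; the releases/storage paths are the ones Envoyer manages, assumed from the config above):

    # hand the deployed files back to the cPanel user
    chown -R eyf:eyf /home/eyf/releases /home/eyf/storage
    # -h changes the symlink itself rather than its target
    chown -h eyf:eyf /home/eyf/current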
I then tried to connect via Envoyer as eyf and ran into some sort of SSH key issue - even though I added the key to eyf via cPanel, it did not seem to take. Repeated attempts to connect from Envoyer eventually led to the IP address being blacklisted.
In response to this, Envoyer simply said "Failed" when trying to connect to the server. Immediately after saying "Failed," a warning message would appear saying that there was a problem with PHP-FPM.
Taylor says that this PHP-FPM warning appears because the connection was unsuccessful and Envoyer could not reach PHP-FPM. Well, this is totally misleading, because I do not have PHP-FPM installed on this server and it has absolutely nothing to do with why the connection failed (it was an SSH authentication problem).
I asked him to please improve the warnings/errors for things like this; it stretched what should have been a quick fix into a several-hour tail-chasing session. Dploy.io, a competitor, clearly showed an SSH connection issue when I first attempted to connect and had forgotten the SSH key - "d'oh! Let me fix that," problem solved in less than a minute.
Anyway, back to Envoyer bliss - just a bit ticked. ;) The IP addresses were whitelisted, I added the SSH key manually for the cPanel user (~/.ssh/id_rsa), and now everything works.
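For reference, the layout Envoyer leaves on the server looks roughly like this (release names are made up, and the shared storage link is illustrative), which is why the Apache documentroot points at /home/eyf/current/public even though current itself is just a symlink:

    /home/eyf/
        releases/
            20150301120000/        # one directory per deployment
            20150302093000/
        storage/                   # shared between releases (linked into each one)
        current -> releases/20150302093000/    # symlink swapped on each deploy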

Related

Laravel deployment to apache with php-fpm restart

I'm quite new to Laravel and the concept of CI/CD, but I have invested the last 24 hours in getting something up and running. I'm using gitlab.com as the repository and have configured its CI/CD functionality there.
Deployments should go to SRV1, which has a corresponding user configured with a certificate. SRV1 then clones the necessary files from the GitLab repo using Deployer. The GitLab repo also has the public key of the SRV1 user. This chain is working quite well.
The problem is that after deploying I need to restart php-fpm so that it picks up the new symlink target and refreshes its cached absolute paths.
I saw various methods to overcome this by tweaking some CGI settings in php-fpm, but these didn't work for me since they all assume nginx, while I'm using Apache.
Is there any other way, when running php-fpm with Apache, to tell it to reinitialize its paths or reload after changes?
The approach of adding the deployer user to the sudoers list and calling service php-fpm restart looks quite hacky to me...
Thanks
UPDATE1:
Actually I found this: https://github.com/lorisleiva/laravel-deployer/blob/master/docs/how-to-reload-fpm.md
It looks like Deployer has a technique for doing this, but it requires the deployer user to have access to the php-fpm reload command, which looks a bit unsafe to me.
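If you do end up going the sudoers route anyway, the usual mitigation is to whitelist only that one command instead of giving the deploy user general sudo. A sketch (the service name and binary path differ per distro):

    # /etc/sudoers.d/deployer -- edit with visudo -f /etc/sudoers.d/deployer
    deployer ALL=(root) NOPASSWD: /usr/sbin/service php-fpm reload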
I haven't found any other solutions. There are some for nginx that tell it to always re-evaluate the real path; for Apache the equivalent should apparently be FollowSymLinks, but it was not working.
For now I created a bash script that runs as root and checks the "current" symlink for changes every 10 seconds; if the target changed, it reloads php-fpm. Not nice, quite ugly in fact, but it should work - a sketch is below.
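Roughly like this (the symlink path and the php-fpm service name are assumptions, adjust them to your setup):

    #!/bin/bash
    # Watch the "current" symlink and reload php-fpm when its target changes.
    LINK=/var/www/project/current
    LAST=$(readlink "$LINK")

    while true; do
        sleep 10
        NOW=$(readlink "$LINK")
        if [ "$NOW" != "$LAST" ]; then
            systemctl reload php-fpm
            LAST=$NOW
        fi
    done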
Still open for other proposals.
I solved this issue on my server by adding a PHP file that clears APCu and OPcache:
<?php
// clear_apc_cache.php - only callable from the server itself
if (in_array(@$_SERVER['REMOTE_ADDR'], ['127.0.0.1', '::1']))
{
    // apcu_clear_cache() takes no arguments in APCu; the 'user'/'opcode'
    // variants belonged to the legacy APC extension
    apcu_clear_cache();
    opcache_reset();
    echo "Cache cleared";
}
else
{
    die("You can't clear cache");
}
Then you have to call it with curl after you have updated your symlink:
/usr/bin/curl --silent https://domain.ext/clear_apc_cache.php
I use GitLab CI/CD and this now works for me.
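In a .gitlab-ci.yml that just means hitting the cache-clear URL as the last step of the deploy job, roughly like this (the job name, stage and deploy command are placeholders for whatever your pipeline already does):

    deploy_production:
      stage: deploy
      script:
        - php vendor/bin/dep deploy production
        - /usr/bin/curl --silent https://domain.ext/clear_apc_cache.php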

Laravel REST API URL 404 not found on AWS EC2 in Apache + MySQL environment - The requested URL was not found on this server

This question is about a Laravel implementation on an Apache + MySQL AWS EC2 instance.
After copying the working Laravel folder from xampp/htdocs/my_project_name, the migrations that create the tables in the MySQL database and the seeders are working.
However, I could not connect to my APIs using Postman (404 not found).
I followed these solution links:
laravel the requested url was not found on this server
https://laracasts.com/discuss/channels/general-discussion/laravel-5-the-requested-url-was-not-found-on-this-server
I managed to modify httpd.conf. However, I could not find the .htaccess file.
(
where can I find .htaccess? Sorry for the stupid question, but I can't find it :) – MilanNz Mar 11 '15 at 12:30
@MilanNz The .htaccess can be found in the public directory of your application. However, the code from this answer goes inside a vhost file. The location of that depends on your server (e.g. for apache2 and unix it's usually at /etc/apache2/sites-available).
)
Also, I was not able to restart my Apache using "service apache2 restart".
So I rebooted the EC2 instance with sudo reboot and reconnected using Postman, but the API URLs were still not found.
There is a possibility that my URL is wrong. So I attach it here:
The URL used is http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/my_project_name/public/api/resultCRUD/list
The working xampp URL is http://localhost/my_project_name/public/api/resultCRUD/list
The Laravel project folder is located at /var/www/html/my_project_name on AWS EC2.
http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/phpinfo.php and
http://ec2-??-??-???-??.us-east-2.compute.amazonaws.com/phpMyAdmin/ are working.
Any help is greatly appreciated. Thanks!
It's finally working.
The reason I was stuck is that most of the answers are for Ubuntu while I am using Red Hat.
For a Red Hat EC2 instance, you first need to change the content of /etc/httpd/conf/httpd.conf following https://pinecode.io/article/setting-up-laravel-56-on-aws-linux.
In this step, I actually changed every "AllowOverride None" to "AllowOverride All" instead of only line 151 of httpd.conf.
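In effect, the relevant block ends up looking something like this (assuming the project is served from the default /var/www/html docroot; adjust the path to where the project actually lives):

    <Directory "/var/www/html">
        Options Indexes FollowSymLinks
        # "All" is what lets Laravel's public/.htaccess rewrite rules take effect
        AllowOverride All
        Require all granted
    </Directory>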
Then restart httpd with sudo service httpd restart,
following https://gistpages.com/posts/enable_mod_rewrite_in_apache2_on_red_hat_linux
After that it all works fine.
I hadn't restarted my Apache service after saving the changes to httpd.conf when I asked this question.

Apache httpd got 403 forbidden on MacOS High Sierra

I installed Apache httpd via Homebrew with brew install apache2, and it worked well with the default configuration at http://localhost:8080.
However, once I added another virtual host pointing to a new folder (actually just a clone of the www folder) and tried to access that new virtual host, I got a 403 Forbidden error.
I don't think the configuration is wrong, because it works fine with Apache2 on Ubuntu, but I don't know why it's broken on macOS, even after I changed the permissions of the new www folder to 777.
Thanks
I hit the same wall, so I'd like to share my experience and hope it can help.
First of all, look at the error message in the log file that you specified for your virtual host.
If you see something like:
[authz_core:error] [pid 57233] [client 127.0.0.1:55693] AH01630: client denied by server configuration: /opt/local/www/your_vhost/
it indicates that the authz_core module caused the Forbidden response. In this case, adding Require all granted to your VirtualHost block can solve the problem - see the example vhost below.
Different modules can cause the same problem, so you'd better check the message in the log first.
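For example, a vhost along these lines (the port, ServerName and paths are placeholders):

    <VirtualHost *:8080>
        ServerName mysite.local
        DocumentRoot "/opt/local/www/your_vhost"

        <Directory "/opt/local/www/your_vhost">
            # without this, authz_core denies the request and Apache returns 403
            Require all granted
        </Directory>
    </VirtualHost>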
The Apache2 error log helped me solve the mystery. It turned out that 127.0.0.1 pointed to a virtual host whose folder contained no index.html. I thought it was pointing to my DocumentRoot, as in the case of localhost.

Lets Encrypt Error "urn:acme:error:unauthorized"

I use Let's Encrypt and get this error:
urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Error parsing key authorization file: Invalid key authorization: malformed token
I try: sudo service nginx stop
but get error: nginx service not loaded
So I had a lot of trouble with this stuff. Basically, the error means that certbot was unable to find the file it was looking for when testing that you owned the site. This has a number of potential causes, so I'll try to summarize, because I encountered most of them when I set this up. For more reference material, I found the GitHub README much more useful than the docs.
First thing to note is that the nginx service needs to be running for the acme authorization to work. It looks like you're saying it's not, so start by spinning that up.
sudo service nginx start
With that going, everything here is based on the file location of the website you're trying to create a certificate for. If you don't know where that is, it will be in the relevant configuration file under /etc/nginx, which depends largely on your version of NGINX, but is usually /etc/nginx/nginx.conf, /etc/nginx/sites-enabled/[site-name], or /etc/nginx/conf/[something].conf. Note that the configuration file (or at least its directory) should be listed in /etc/nginx/nginx.conf, so you might start there.
The root directory is important, because it is where certbot needs to write. Certbot creates some files in a nested folder structure so that the URL it then requests returns the contents of those files. The folder it tries to create will be under the root directory you give it, at:
/.well-known/acme-challenge
It will then try to create a file with an obscure name (I think it's a GUID), and read that file from the URL. Something like:
http://example.com/.well-known/acme-challenge/abcdefgh12345678
This is important, because if your root directory is poorly configured, the url will not match the folder and the authorization will fail. And if certbot does not have write permissions to the folders when you run it, the file will not be created, so the authorization will fail. I encountered both of these issues.
Additionally, you may have noticed that the above URL is http, not https. This is also important. I was using an existing encryption tool, so I had to configure NGINX to serve the /.well-known folder tree on port 80 instead of 443 while still keeping most of my data under the secure https URL. These two things make for a somewhat complicated NGINX file, so here is an example configuration to reference.
server {
    listen 80;
    server_name example.com;

    location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /home/example;
    }

    location '/' {
        return 301 https://$server_name$request_uri;
    }
}
This allows port 80 for everything related to the certbot challenges, while retaining security for the rest of my website. You can modify the directory permissions to ensure that certbot has access to write the files, or simply run it as root:
sudo ./certbot-auto certonly
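If you prefer to point it explicitly at the web root from the config above instead of relying on the interactive prompts, something like this should work (the domain and path are placeholders):

    sudo ./certbot-auto certonly --webroot -w /home/example -d example.com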
After you get the certificate, you'll have to set it up in your config as well, but that's outside the scope of this question, so here's a link.

Using an NGINX server to deploy a Meteor App from an Amazon Linux AMI 2013.09.2 instance - receiving a Module Error

I am attempting to deploy my first web application (a version of Telescope from the MeteorJS framework) via Heroku to a custom subdomain from a Amazon Linux AMI 2013.09.2 instance. I am following along with this tutorial - http://satishgandham.com/2013/12/a-complete-guide-to-install-production-ready-telescope-on-your-own-server/ - but once I attempt to run Telescope using PORT=3000 MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js, I receive this error message: Error: Cannot find module '/home/ec2-user/bundle/programs/server/node_modules/fibers/client/main.js'
To solve this I tried cp / mv on the file main.js, which is originally located in the ~/Telescope/client directory, moving it over to the /home/ec2-user/bundle/programs/server directory and even to /home/ec2-user/bundle/programs/server/node_modules/fibers, but I cannot seem to separate main.js from the client directory. I am not sure if that is the issue or if there is some other underlying problem, but at this point I want to find a workaround to using a proxy server. I thought that moving the main.js file out of the client directory would be sufficient, but apparently not. I am not sure it is imperative for my purposes to keep trying to use a proxy, but if there is a fix, I would not mind learning about it.
Or, if anyone could direct me on how this - https://github.com/aldeed/deploymeteor/ - could be a potential workaround to using an NGINX server proxy, your help would be much appreciated.
You are getting the error because you are not running the command from your home folder.
You were in bundle/programs/server/node_modules/fibers, so node resolved client/main.js relative to that folder (which is exactly the path in the error message).
Either use an absolute path for client/main.js, or cd back to your home folder and run:
MONGO_URL=mongodb://localhost:3000/Telescope ROOT_URL=http://ec2-54-193-42-229.us-west-1.compute.amazonaws.com node client/main.js
PS: It would be more helpful for others if you asked this question on the post itself, instead of here.
