NGINX does not accept files larger than 1MB - Laravel

Stack: Laravel 7.0.8 + nginx 1.14.2
I cannot upload files bigger than 1MB; files smaller than 1MB are uploaded successfully.
The nginx log does not show anything useful.
The Laravel log is empty.
In Laravel, at my controller endpoint, I die and dump the validated data. If the file is less than 1MB I get the dd() printout as expected. If the uploaded file is larger than 1MB, no dd() output is displayed; the browser 'flashes' but no page reload is initiated.
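For context, the endpoint looks roughly like this (a sketch; the field name and validation rules here are placeholders, not my real ones):

use Illuminate\Http\Request;

public function store(Request $request)
{
    // 'file' and the rules are illustrative only
    $validated = $request->validate([
        'file' => 'required|file',
    ]);
    dd($validated); // prints for files under 1MB, silent for larger ones
}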
I tried the following, without success:
In /etc/nginx/nginx.conf I added client_max_body_size 100M;
Then ran: nginx -s reload && service nginx restart
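For reference, the directive sits at the http level, along these lines (100M is just the limit I picked):

http {
    # allow request bodies up to 100 MB (the nginx default is 1 MB)
    client_max_body_size 100M;
}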
So my questions:
Is there anything else I can do to change the max body size?
Is there a way to check any logs that can point me in the right direction?

Solved it by updating the php.ini file on the server.
In the php.ini the following parameters have to be changed:
post_max_size
upload_max_filesize
max_file_uploads
If those three are not updated accordingly, Laravel does not recognize the file and file-related functions do not work.
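For example, with illustrative values (pick limits that fit your app; post_max_size should be at least as large as upload_max_filesize):

; php.ini
upload_max_filesize = 100M
post_max_size = 100M
max_file_uploads = 20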
When running nginx, the php.ini file in use can be found by running the command:
php --ini
On nginx, look for the php-fpm one; this is the file which has to be updated.
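The output looks something like this (the paths here are just an example from Debian; note that the CLI can load a different php.ini than PHP-FPM, so make sure you edit the one FPM actually uses):

Configuration File (php.ini) Path: /etc/php/7.3/cli
Loaded Configuration File:         /etc/php/7.3/cli/php.ini
Scan for additional .ini files in: /etc/php/7.3/cli/conf.d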
In order for the changes to be loaded, the process has to be reloaded. On Debian the command is:
service php7.3-fpm reload
You will have to check what the name of your PHP process is. To see all processes and find your PHP process you can use:
service --status-all

Related

Laravel Increase Upload file Size Heroku

It looks like I cannot upload files larger than 2MB on Heroku.
I can upload a 3MB file locally, but I can't upload the same file after pushing to Heroku (using S3 storage).
I updated the .htaccess file and I have added
ini_set('upload_max_filesize', '64M');
to my controller, but it doesn't work.
Is there a way to change the php.ini settings on Heroku?
You need to create a .user.ini file in the root directory:
post_max_size = 25M
upload_max_filesize = 25M
Then also update your Procfile for Apache:
web: vendor/bin/heroku-php-apache2 -i user.ini rootDirectoryName/
For more information, read this blog and the official documentation.
I had a similar issue. I was googling and trying many solutions, but nothing helped, so I decided to switch from Apache to NGINX, and that solved the problem - https://stackoverflow.com/a/69509235/7092969
Overriding Heroku PHP ini files is kinda complicated, and the official Heroku documentation did not help me at all :/

Where is the /etc/nginx folder?

I'm getting a 413 Request Entity Too Large error when I try to upload a large image (~1MB) to my Laravel API. The solution everyone gives is to modify the /etc/nginx/nginx.conf file, but I can't seem to find that file. Where exactly is it located? I'm using Windows 10 and Laravel 6.8.
There is no /etc/nginx/ directory on Windows because that is a path in the Linux operating system. On Windows you have installed either XAMPP or WAMP; for both of them you need to increase upload_max_filesize and post_max_size in your php.ini file. Follow this article to change php.ini for XAMPP; it's very similar for WAMP.
"/etc/nginx" is for linux. In Windows maybe you are using Xampp (which uses Apache instead Nginx). You may not even be using Apache because you probably run the application with php artisan serve. I recommend you check php settings in for c:\xampp\php\php.ini (edit upload_max_filesize and post_max_size directives)
More info:
https://www.keycdn.com/support/413-request-entity-too-large
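In either case the fix is the same pair of directives in whichever php.ini your installation actually loads (the path below is the usual XAMPP location, only an example):

; C:\xampp\php\php.ini
upload_max_filesize = 20M
post_max_size = 20M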

500 issue with Laravel

I have seen this answer in many posts, but they have not helped me at all. I followed the regular steps to create the Laravel project, like this:
I cloned from my repository.
I ran composer update.
I added 777 permissions to the storage and bootstrap folders.
I have a .env file.
I verified the .htaccess and it's OK.
It is working on localhost, but when I try to replicate it on Hostinger it does not work; it displays the 500 server error. So I wonder: what is the problem?
By the way, I checked the logs and they were empty. I also set the Laravel project's debug flag to true.
The website URL is xellin.com.
Thanks.
I think this is a good opportunity to point out how PHP, Laravel and the underlying server interact with each other.
First:
The HTTP server inspects the document root and .htaccess to get instructions.
If the requested file is a .php file (as with Laravel), it calls the PHP handler.
The PHP handler could be an FPM version or a FastCGI version.
-> If an error occurs while parsing the .htaccess, or in the initial interaction between the HTTP server and PHP, Laravel never actually runs. Everything ends up in a PHP error log.
To find out what's wrong, you need to inspect what PHP and the HTTP server said about the error in their respective logs.
In short: at this point it is not a Laravel error, but a server/PHP one.
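As a concrete starting point, tail the server and PHP logs while reproducing the error (the paths below are common Debian/Ubuntu defaults and only an assumption; adjust them to your host):

tail -n 50 /var/log/apache2/error.log    # HTTP server / .htaccess problems
tail -n 50 /var/log/php7.3-fpm.log       # PHP-FPM handler problems
tail -n 50 storage/logs/laravel.log      # Laravel's own log (only written if Laravel actually ran)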
Second:
If Apache/PHP runs well, then PHP executes the Laravel application lifecycle... if Laravel encounters a problem, you will see the usual error output of the Laravel error handler.
I think this is a must-know for working with web apps in general, because developers often fail to identify whether the problem was with Laravel or with PHP / the server itself.
As a side note, that's why it is important to know how to choose a proper hosting service for Laravel.
Thanks for reading.
You can try to clear the cache,
like this:
php artisan optimize
Or
you can manually delete the cache files located in the bootstrap folder: inside the bootstrap folder there is a cache folder; delete all the files in it except the .gitignore file, and your issue should be fixed.
If the error shows up again on the live server, you can update your Composer dependencies and then run
php artisan optimize
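The manual cleanup described above can be done from the project root with a one-liner (assuming the standard bootstrap/cache layout):

find bootstrap/cache -type f ! -name '.gitignore' -delete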
First of all: if you give any of your folders 777 permissions, you are allowing ANYONE to read, write and execute any file in that directory. This means you have given anyone in the entire world (any hacker or malicious person) permission to upload any file, virus or otherwise, and then execute that file. So please be careful: if you set your folder permissions to 777, you have opened your server to anyone who can find that directory. Please read the full explanation here.
Second, here are the detailed steps I use to deploy my projects to a server:
run npm run production, then update your GitHub repo
clone the project from GitHub to the server, into a folder outside public_html
run cd <cloned folder name>
run composer install
run npm install
copy the .env file into the cloned folder and configure it (be sure the name is .env, not env)
copy all the content of cloned_project_folder_name/public to the public_html folder
in index.php inside the public_html folder, edit the paths as below (note that autoload.php must be required before bootstrap/app.php):
require __DIR__.'/../cloned_project_folder_name/vendor/autoload.php';
$app = require_once __DIR__.'/../cloned_project_folder_name/bootstrap/app.php';
set your .htaccess properly (a stock example is shown after this list)
change permissions to 755 for index.php and all files in the public_html folder
run composer install --optimize-autoloader --no-dev
run php artisan config:cache
run php artisan route:cache
I think that covers it all; hope that will help.
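For the .htaccess step, the stock public/.htaccess that ships with Laravel is a good baseline (shortened here; treat it as a starting point, not a drop-in guarantee):

<IfModule mod_rewrite.c>
    <IfModule mod_negotiation.c>
        Options -MultiViews -Indexes
    </IfModule>

    RewriteEngine On

    # Handle Authorization Header
    RewriteCond %{HTTP:Authorization} .
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]

    # Send Requests To Front Controller...
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>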

Laravel, Linux 2, CentOS PostTooLargeException

I modified php.ini to increase post_max_size, upload_max_filesize and memory_limit, after using php -i | grep php.ini, which returned the file location /etc/php.ini.
After changing the settings, the Apache server was restarted using sudo systemctl restart httpd. However, the PostTooLargeException error persists.
Is there a way to force an error message that will show why the error is persisting?
I have also seen some people say to edit the .htaccess; however, my project contains 6 .htaccess files and I am not sure which one would need to be edited.
No, there isn't a way to force an error explaining what you want.
To debug it, you should create a PHP file on your server which runs phpinfo():
<?php
phpinfo();
There you can check that your post_max_size and upload_max_filesize directives are correct.
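A quick complementary check from the shell (this prints the CLI values, which may come from a different php.ini than the one Apache uses; that is exactly why the phpinfo() page in the browser is the authoritative check):

php -i | grep -E 'post_max_size|upload_max_filesize'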

Weird nginx caching

I'm trying to set up a development server with PuPHPet, which is essentially just a pre-made build of Vagrant with PHP, Nginx and a few other things pre-installed.
I'm having a weird caching issue with my .css files.
When I access my .css file directly at my dev URL, it shows part of the file: the file as it originally was before I started editing it. You will notice from my screenshot that I've deleted the entire contents of the file and replaced them with the numbers "12345". When I refresh the .css file in my browser, I see the first 5 characters of the old file. Adding an extra character restores one more character of the old file.
Restarting nginx does not clear the cache. Ctrl+F5 does not clear the cache. Checking the file contents from vagrant ssh:
[08:11 PM]-[vagrant@precise64]-[/var/www/public/css]-[hg default]
$ cat main.css
12345
I can see the file is up to date. The file it's partially displaying simply does not exist. My best guess is it's reading the length of the file on disk, and then pulling the actual contents from memory.
The built-in PHP 5.4 development server does not have this problem, so I'm pretty sure Nginx is the culprit.
How can I get Nginx to behave in a sane fashion?
Most probably it's this known VirtualBox bug with the sendfile system call.
Try disabling sendfile in your nginx config:
sendfile off;
(In Apache: EnableSendfile off)
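A minimal sketch of where the directive goes (the path is the Debian default; adjust as needed):

# /etc/nginx/nginx.conf
http {
    sendfile off;  # avoids stale reads on VirtualBox shared folders
}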
