I have an API from which I can export multiple data with relations. I just bought a new server, and on this server, when I make a request with a big JSON, it reaches the route file twice and returns Unauthenticated (it shouldn't, since it's just one request and I don't do any redirects); with a smaller JSON it doesn't happen.
But even after returning Unauthenticated, I can see in my logs that the request keeps running.
These are my server configs:
memory_limit = 200M
max_execution_time = 600
max_input_time = 600
post_max_size = 200M
upload_max_filesize = 200M
display_errors = On
display_startup_errors = On
default_socket_timeout = 600
max_user_connections = 1000
UPDATE
Weird thing: when I add a dump('test') somewhere in my controller, it doesn't return the Unauthenticated exception, and at the end of the request it returns the success JSON.
Are you sending any data in the request header?
Add this to the root .htaccess; hope this will work.
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
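If you want to confirm whether the Authorization header is actually reaching Laravel, one quick check is a temporary debug route along these lines (a sketch; the route name is made up):
use Illuminate\Support\Facades\Route;
Route::get('/debug-auth-header', function () {
    // returns whatever Authorization value PHP actually received;
    // an empty value suggests the header is stripped before it reaches the app
    return response()->json(['authorization' => request()->header('Authorization')]);
});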
I have a standard Apache 2 setup on an Ubuntu Server. I basically block almost everything from the outside world, but it is still open to friends and family for some basic stuff. Anyway, I frequently notice entries like these (generalizations for common sense) in my access logs:
157.245.70.127 - - [every-day] "GET /ab2g HTTP/1.1" 400 497 "-" "-"
157.245.70.127 - - [every-day] "GET /ab2h HTTP/1.1" 400 497 "-" "-"
xxx.xxx.xxx.xxx - - [sometimes] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 400 487 "-" "Mozilla/5.0 the rest or the user-agent..."
Nothing scary, but I wanted to just force them to a 403. Here is a generalized excerpt from the config:
<Directory ~ "/var/www/.*">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
# A pure global block of anything not supported in my server
Include /etc/apache2/IPBlackList.conf
RewriteEngine on
RewriteCond %{REQUEST_URI} ^.*?(ab2[gh]|and other things).*$ [OR]
RewriteCond %{HTTP_USER_AGENT} (^-?$|and other things) [OR]
RewriteCond %{REQUEST_METHOD} CONNECT
RewriteRule "" "-" [F]
</Directory>
This works for every other case. That is, wherever you see "and other things", all of those result in a 403, regardless of whether the IP is blocked or not. That is the goal, but for some entries I still get a 400. Not bad, but it's getting on my nerves.
I have expressly put 157.245.70.127 in my IP block list. For all other IPs in the block list, it works just fine.
The blank user agent rule works virtually every time, but that one gets through every single time.
In other words, that 157 IP is getting through the IP block, the request URI block, and the blank user agent block.
The "cgi-bin" ones come from different IPs and have varying URIs, and sometimes they get a 403, but other times not. Generally speaking, blocking the IP works, but why are those POST requests not being blocked in some cases?
What am I missing? How can I resolve this?
On my Apache servers, these 400 error responses are usually sent in response to HTTP/1.1 requests that have no Host header. This error check precludes other processing, so mod_rewrite and SetEnvIf have no effect.
FWIW, the only way I've found to intercept them is from the log output stream, by sending it to an external program which looks for and processes the requests marked with a 400 error. But they have already been handled by that time.
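For illustration, that piped-log approach can be wired up roughly like this in the Apache config (the helper script path is hypothetical):
# send every access-log line to an external helper that watches for 400 responses
CustomLog "|/usr/local/bin/watch-400s.sh" combined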
When I upload a large-dimension photo (5625 × 5000, about 2 MB) I get a 500 error, and if I reduce the dimensions of the image it uploads successfully. Why is this happening? I am on a GoDaddy server; please guide me through this.
My PHP.ini settings
max_execution_time = 1000
max_input_time = 1000
memory_limit = 500M
post_max_size = 1000M
upload_max_filesize = 1000M
Okay, I have troubleshot the problem. When I create the thumb it shows the error, but when I remove that code it saves the image.
My code to create the thumb is below. Can anybody explain why this is happening, and how to get rid of it?
$img = Image::make($path.$name);
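For reference, a fuller version of that step with Intervention Image would look roughly like this (a sketch; the 300px width and the thumbs/ path are placeholders, not the original code):
$img = Image::make($path.$name); // decodes the full-size image into memory
$img->resize(300, null, function ($constraint) {
    $constraint->aspectRatio(); // keep proportions when only a width is given
});
$img->save($path.'thumbs/'.$name);
Note that Image::make() has to decode the whole bitmap, so a 5625 × 5000 image needs on the order of 100 MB of RAM with the GD driver before any resizing happens, which is often where this step fails on shared hosting.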
I have a Laravel application hosted on GoDaddy. I have a multiple-image uploader in a form; when using small images it works fine, but when I add even one image of about 1 MB I get a "Request Entity Too Large" message. I have tried many solutions, like adding a php.ini file, or even a .user.ini file, to the public_html folder with these contents:
file_uploads = On
upload_max_filesize = 256M
post_max_size = 257M
max_input_time = 300
max_execution_time = 300
max_file_uploads = 20
for both files, but unfortunately I still get the same error.
I think you can't change the php.ini settings on GoDaddy shared hosting, but give this a try:
post_max_size = xxM ; change xxM to whatever you need
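To check whether the .user.ini or php.ini override is actually being picked up, you could dump the effective values from PHP itself (a quick sketch; check.php is a hypothetical temporary file):
<?php
// check.php - temporary file in public_html, just to inspect the live limits
echo 'upload_max_filesize: ' . ini_get('upload_max_filesize') . "\n";
echo 'post_max_size: ' . ini_get('post_max_size') . "\n";
echo 'max_execution_time: ' . ini_get('max_execution_time') . "\n";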
When a user is filling in a form in the software, if it takes more than 5 or 6 minutes, I get an error in the XMLHttpRequest response. Is there any way to increase the allowed time? The users of my software are very slow at typing.
Please help me.
Place this at the top of your PHP script (instead of changing the php.ini file) and let your script loose!
ini_set('max_execution_time', 300); //300 seconds = 5 minutes
After this, restart your local server
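An equivalent call from inside the script, if you prefer it over ini_set, is:
set_time_limit(300); // also allows the current script to run for up to 5 minutes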
Alternatively, place the code below in .htaccess to increase it:
<IfModule mod_php5.c>
php_value max_execution_time 300
</IfModule>
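If the host runs PHP 7 under mod_php, the module name in that block is different; an equivalent sketch would be:
<IfModule mod_php7.c>
php_value max_execution_time 300
</IfModule>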
I am trying to use proxy_cache_use_stale error; to have nginx serve a cached page when the target returns HTTP status 500 (Internal Server Error).
I have the following setup:
location /test {
proxy_cache maincache;
proxy_cache_valid 200 10s;
proxy_cache_use_stale error;
proxy_pass http://127.0.0.1:3000/test;
}
location /toggle {
proxy_pass http://127.0.0.1:3000/toggle;
}
/test will return either the current time with HTTP status 200, or the current time with HTTP status 500. If I call /toggle, the status returned from /test switches from 200 to 500.
My expectation was that I could send a call to /test and get the current time, then call /toggle, and subsequent calls to /test would return the time from when the endpoint was first called. What actually happens is that it keeps the last cached response for 10 seconds and then starts sending back the current time, not using the cache at all.
I understand that setting proxy_cache_valid 200 10s; will keep it from refreshing the cache when something other than 500 is returned, and will store new content in the cache once 10 seconds have passed and a non-error response is returned.
What I assumed after reading the documentation was that old cache entries would not be automatically cleared until the time set by the inactive flag for the cache had passed. I have not set the inactive flag, so I expected "proxy_cache_use_stale error" to prevent the cache from refreshing until either 10 minutes had passed (the default when inactive is not defined) or errors were no longer returned. What part of the documentation have I misunderstood? How should this be done correctly?
The nginx documentation I am referring to is the one found here:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.112574977.446076600.1424025436#proxy_cache
You should use "http_500" instead of "error"; see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream (proxy_cache_use_stale takes the same arguments as proxy_next_upstream).
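Applied to the location block above, that would look roughly like this (a sketch based on the original config):
location /test {
    proxy_cache maincache;
    proxy_cache_valid 200 10s;
    # serve the stale cached copy when the upstream answers with a 500
    proxy_cache_use_stale http_500;
    proxy_pass http://127.0.0.1:3000/test;
}
Several conditions can also be combined, e.g. proxy_cache_use_stale error timeout http_500;.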