I have a standard Apache 2 setup on an Ubuntu server. I block almost everything from the outside world, but the server is still open to friends and family for some basic stuff. Anyway, I frequently notice entries like these (generalized for obvious reasons) in my access logs:
157.245.70.127 - - [every-day] "GET /ab2g HTTP/1.1" 400 497 "-" "-"
157.245.70.127 - - [every-day] "GET /ab2h HTTP/1.1" 400 497 "-" "-"
xxx.xxx.xxx.xxx - - [sometimes] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 400 487 "-" "Mozilla/5.0 the rest or the user-agent..."
Nothing scary, but I wanted to force them to a 403. Here is a generalized excerpt from the config:
<Directory ~ "/var/www/.*">
Options Indexes FollowSymLinks
AllowOverride None
Require all granted
# A pure global block of anything not supported in my server
Include /etc/apache2/IPBlackList.conf
RewriteEngine on
RewriteCond %{REQUEST_URI} ^.*?(ab2[gh]|and other things).*$ [OR]
RewriteCond %{HTTP_USER_AGENT} (^-?$|and other things) [OR]
RewriteCond %{REQUEST_METHOD} CONNECT
RewriteRule "" "-" [F]
</Directory>
This works for every other case. That is, wherever you see "and other things", all of those result in a 403, regardless of whether the IP is blocked or not. That is the goal, but some entries still get a 400. Not bad, but it's getting on my nerves.
I have expressly put 157.245.70.127 in my IP block list. For all other IPs in the block list, it works just fine.
The blank-user-agent rule works virtually every time, but that one request gets through every single time.
In other words... that 157 IP is getting past the IP block, the request-URI block, and the blank-user-agent block.
The "cgi-bin" requests come from different IPs and have varying URIs, and sometimes they get a 403, but other times not. Generally speaking, blocking the IP works, but why is the rule not matching those POST requests in some cases?
What am I missing???
How can I resolve this???
On my Apache servers, these 400 error responses are usually sent in response to HTTP/1.1 requests that have no Host header. That error check happens before other request processing, so mod_rewrite and SetEnvIf have no effect.
FWIW - the only way I've found to intercept them is from the log output stream, by piping it to an external program that looks for and processes the entries marked with a 400 error. But by that time the requests have already been processed.
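A minimal sketch of that log-stream approach, assuming a hypothetical watcher script at /usr/local/bin/watch-400s.sh that reads access-log lines on stdin and reacts to the 400s (for example by adding the client IP to a firewall block list):
# Keep the normal access log, and additionally pipe a copy of every log line
# to the external watcher (the script path here is hypothetical).
CustomLog ${APACHE_LOG_DIR}/access.log combined
CustomLog "|/usr/local/bin/watch-400s.sh" combined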
I have an API that can export multiple records with their relations. I just bought a new server, and on this server, when I make a request with a big JSON payload, it reaches the route file twice and returns Unauthenticated (it shouldn't, because it's just one request and I don't make any redirects); with a smaller JSON it doesn't happen.
But even though it returns Unauthenticated, I can see in my logs that the request keeps running.
These are my server configs:
memory_limit = 200M
max_execution_time = 600
max_input_time = 600
post_max_size = 200M
upload_max_filesize = 200M
display_errors = On
display_startup_errors = On
default_socket_timeout = 600
max_user_connections = 1000
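A quick way to double-check that these are the values PHP actually loads (this assumes shell access, and note that the web SAPI can read a different php.ini than the CLI):
# CLI view of the limits; compare with phpinfo() output from the web SAPI
# (FPM or mod_php) if the two might use different php.ini files.
php -i | grep -E 'memory_limit|max_execution_time|max_input_time|post_max_size|upload_max_filesize'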
UPDATE
Weird thing: when I add a dump('test') somewhere in my controller, it doesn't return the Unauthenticated exception, and at the end of the request it returns the success JSON.
Are you sending any data in the request header?
Add this to the root .htaccess; hope this will work:
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
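If the server runs PHP via CGI/FastCGI and Apache is 2.4.13 or newer (an assumption about your setup, not something stated above), CGIPassAuth is a possible alternative to the rewrite trick:
# Alternative sketch: let Apache pass the Authorization header through to the
# backend instead of copying it into an environment variable with mod_rewrite.
# Requires Apache 2.4.13+ and applies in directory/.htaccess context.
CGIPassAuth On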
The dev team came to me and the Senior Sys Admin and stated that a 400 error is popping up when it needs to be a 404.
I ran an infinite loop to see the output with:
while :; do wget http://<URL here>/-`date +%s`; sleep 1; done
It just appends the current Unix timestamp, so I know the request should return a 404. The output shows 404 for 10-15 iterations, then a single 400, and then the cycle repeats.
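For reference, an equivalent loop with curl (only the client is swapped in) prints just the status codes, which makes the stray 400 easier to spot:
# Same idea as the wget loop above, but printing only the HTTP status code.
while :; do curl -s -o /dev/null -w '%{http_code}\n' "http://<URL here>/-$(date +%s)"; sleep 1; done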
We have tried to edit the ErrorDocument directive to point to a custom 404 error document to no avail.
What could be causing this 400 error to pop up every few requests?
We are using Apache 2.4 and we are trying to configure MaxRequestWorkers and ThreadLimit for the event MPM. Below is the configuration I have in Apache's httpd.conf, but it doesn't seem to take any effect; the server continues to use the default values (400 MaxRequestWorkers and 25 ThreadsPerChild). I'm not sure if I am missing anything in my configuration.
I want to configure my server to use 1024 MaxRequestWorkers and 64 ThreadsPerChild.
We have roughly 2 GB of RAM and 2 GB of swap, Apache 2.4 (event MPM), and Red Hat Linux.
Any help would be greatly appreciated. Thank you so much!!
httpd.conf
------------
# Event MPM
# StartServers: initial number of server processes to start
# MaxRequestWorkers: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of Event threads which are kept spare
# MaxSpareThreads: maximum number of Event threads which are kept spare
# ThreadsPerChild: constant number of Event threads in each server process
# MaxConnectionsPerChild: maximum number of connections a server process serves
<IfModule event.c>
ServerLimit 16
StartServers 8
MaxRequestWorkers 1024
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 64
ThreadLimit 64
MaxConnectionsPerChild 0
</IfModule>
I realise that this is an old post. Just in case anyone else comes across this again.
Check the exact module name. If you look in /etc/httpd/conf.modules.d/00-mpm.conf (or the equivalent location; this was on RHEL 7/CentOS 7) for the line that loads the event MPM module:
LoadModule mpm_event_module
Copy this module name 'mpm_event_module'.
Rather than specifying this at the end of httpd.conf, it's better practice to create a file in /etc/httpd/conf.d/ called mpm_event.conf and put the configuration there.
In this instance, I believe changing:
<IfModule event.c>
to
<IfModule mpm_event_module>
Then restarting httpd would have fixed it.
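So, keeping the same values as the block in the question, it would become:
<IfModule mpm_event_module>
ServerLimit 16
StartServers 8
MaxRequestWorkers 1024
MinSpareThreads 75
MaxSpareThreads 250
ThreadsPerChild 64
ThreadLimit 64
MaxConnectionsPerChild 0
</IfModule>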
Kind Regards,
Will
I am trying to use proxy_cache_use_stale error; to let nginx serve a cached page when the target returns HTTP status 500 (internal server error).
I have the following setup:
location /test {
proxy_cache maincache;
proxy_cache_valid 200 10s;
proxy_cache_use_stale error;
proxy_pass http://127.0.0.1:3000/test;
}
location /toggle {
proxy_pass http://127.0.0.1:3000/toggle;
}
/test will return either the current time with HTTP status 200, or the current time with HTTP status 500. If I call /toggle, the value returned from /test switches between 200 and 500.
My expectation was that I should be able to send a call to /test and get the current time. I should then be able to call /toggle, and subsequent calls to /test would return the time from when the function was first called. What actually happens is that it keeps the last cached response for 10 seconds and then sends back the current time, not using the cache at all.
I understand that setting proxy_cache_valid 200 10s; will keep it from refreshing the cache when something other than 500 is returned, and will store new content in the cache once 10 seconds have passed and a non-error response is returned.
What I assumed after reading the documentation was that the old cache would not be automatically cleared until a time equal to the inactive flag set for the cache had passed. I have not set the inactive flag, so I expected "proxy_cache_use_stale error" to prevent the cache from refreshing until either 10 minutes had passed (the default when inactive is not defined) or errors were no longer returned. What part of the documentation have I misunderstood? How should this be done correctly?
The nginx documentation I am referring to is here:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html?&_ga=1.112574977.446076600.1424025436#proxy_cache
you should use "http_500" instead of "error", see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream (proxy_cache_use_stale uses same arguments as proxy_next_upstream)
We want to decrease the load on one of our web servers, and we are running some tests with Squid configured as a reverse proxy.
The configuration is shown below:
http_port 80 accel defaultsite=original.server.com
cache_peer original.server.com parent 80 0 no-query originserver name=myAccel
acl our_sites dstdomain .contentpilot.net
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
The problem we are having is that the server is returning TCP_MISS pretty much all the time.
1238022316.988 86 69.15.30.186 TCP_MISS/200 797 GET http://original.server.com/templates/site/images/topnav_givingback.gif - FIRST_UP_PARENT/myAccel -
1238022317.016 76 69.15.30.186 TCP_MISS/200 706 GET http://original.server.com/templates/site/images/topnav_diversity.gif - FIRST_UP_PARENT/myAccel -
1238022317.158 75 69.15.30.186 TCP_MISS/200 570 GET http://original.server.com/templates/site/images/topnav_careers.gif - FIRST_UP_PARENT/myAccel -
1238022317.344 75 69.15.30.186 TCP_MISS/200 2981 GET http://original.server.com/templates/site/js/home-search-personalization.js - FIRST_UP_PARENT/myAccel -
1238022317.414 85 69.15.30.186 TCP_MISS/200 400 GET http://original.server.com/templates/site/images/submenu_arrow.gif - FIRST_UP_PARENT/myAccel -
1238022317.807 75 69.15.30.186 TCP_MISS/200 2680 GET http://original.server.com/templates/site/js/homeMakeURL.js - FIRST_UP_PARENT/myAccel -
1238022318.666 1401 69.15.30.186 TCP_MISS/200 103167 GET http://original.server.com/portalresource/lookup/wosid/intelliun-2201-301/image2.jpg - FIRST_UP_PARENT/myAccel image/pjpeg
1238022319.057 1938 69.15.30.186 TCP_MISS/200 108021 GET http://original.server.com/portalresource/lookup/wosid/intelliun-2201-301/image1.jpg - FIRST_UP_PARENT/myAccel image/pjpeg
1238022319.367 83 69.15.30.186 TCP_MISS/200 870 GET http://original.server.com/templates/site/images/home_dots.gif - FIRST_UP_PARENT/myAccel -
1238022319.367 80 69.15.30.186 TCP_MISS/200 5052 GET http://original.server.com/templates/site/images/home_search.jpg - FIRST_UP_PARENT/myAccel -
1238022319.368 88 69.15.30.186 TCP_MISS/200 5144 GET http://original.server.com/templates/site/images/home_continue.jpg - FIRST_UP_PARENT/myAccel -
1238022319.368 76 69.15.30.186 TCP_MISS/200 412 GET http://original.server.com/templates/site/js/showFooterBar.js - FIRST_UP_PARENT/myAccel -
1238022319.377 100 69.15.30.186 TCP_MISS/200 399 GET http://original.server.com/templates/site/images/home_arrow.gif - FIRST_UP_PARENT/myAccel -
We have already tried clearing the cache completely. Any ideas? Could it be that my web site is marking some of the content as different each time, even though it has not changed since the last time it was requested by the proxy?
What headers is the origin server (web server) sending back with your content? In order to be cacheable by squid, I believe you generally have to specify either a Last-Modified or ETag in the response header. Web servers will typically do this automatically for static content, but if your content is being dynamically served (even if from a static source) then you have to ensure they are there, and handle request headers such as If-Modified-Since and If-None-Match.
Also, since I got pointed to this question by your subsequent question about sessions: is there a "Vary" header coming out in the response? For example, "Vary: Cookie" tells caches that the content can vary according to the Cookie header in the request, so static content wants to have that removed. But your web server might be adding it to all responses if there is a session, regardless of the static/dynamic nature of the data being served.
In my experience, some experimentation with the HTTP headers to see what the effects are on caching is of great benefit: I remember finding that the solutions were not always obvious.
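A quick way to see exactly what the origin is sending (the URL is just one of the objects from the access log above) is something like:
# HEAD request against the origin; look for Last-Modified, ETag, Cache-Control,
# Expires and Vary in the output.
curl -sI http://original.server.com/templates/site/images/topnav_givingback.gif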
Examine the headers returned with Wireshark, or with Firebug in Firefox (the latter is easier to prod around in, but the former will give you more low-level information if you end up needing it).
Look for these items in the response headers (click on an item in the 'Net' view to expand it and see the request and response headers):
Last-Modified date -> if not set to a sensible time in the past then it won't be cached
ETags -> if these change every time the same item is requested then it will be re-fetched
Cache-Control -> Requests from the client with max-age=0 will (I believe) request a fresh copy of the page each time
(edit) Expires header -> If this is set in the past (i.e. always expired) then squid will not cache it
As suggested by araqnid, the HTTP headers can make a huge difference to what the proxy thinks it can cache. If your back-end is using Apache, then test that static files served without going via PHP or any other application layer are cacheable.
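If the static files turn out not to carry any freshness information, a small mod_expires sketch along these lines (assuming the module is enabled; the extension list is only an example) gives them an explicit lifetime that squid can honour:
<IfModule mod_expires.c>
  <FilesMatch "\.(gif|jpe?g|png|css|js)$">
    ExpiresActive On
    ExpiresDefault "access plus 1 day"
  </FilesMatch>
</IfModule>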
Also, check that the squid settings for maximum_object_size and minimum_object_size are set to sensible values (the defaults are 4 MB and 0 KB, which should be fine), and that the maximum cache item ages are also set sensibly.
(See http://www.visolve.com/squid/squid30/cachesize.php#maximum_object_size for this and other settings)
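As a purely illustrative squid.conf fragment (the refresh_pattern values are an example, not a recommendation), those settings look like:
# Object size limits (these are the defaults mentioned above).
maximum_object_size 4096 KB
minimum_object_size 0 KB
# Example refresh rule for static assets: minimum age 1 day, 20% of object
# age, maximum age 1 week.
refresh_pattern -i \.(gif|jpe?g|png|css|js)$ 1440 20% 10080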