I am running Squid 3.1 on CentOS 6.6. Sometimes when a machine joins the network it starts causing forwarding loops in Squid and, as a result, slows down the internet. Entries like this keep filling cache.log:
2015/07/20 10:58:44| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
Host: 10.0.5.50:3128
Via: 1.0 squid.mydominname.com (squid/3.1.10), 1.1 squid.mydominname.com (squid/3.1.10)
X-Forwarded-For: 10.0.5.143, 10.0.5.50
Cache-Control: max-age=259200
Connection: keep-alive
After some time I also get a file descriptor error like so:
client_side.cc(2994) okToAccept: WARNING! Your cache is running out of filedescriptors
X-Forwarded-For identifies the offending machine. In the past, such a machine turned out to be running suspicious software that caused this problem.
This doesn't happen all the time, only when a bad machine joins the network. Is there any Squid configuration to protect the network from this kind of forwarding loop?
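One common trigger is a request addressed to the proxy's own address and port (note the Host: 10.0.5.50:3128 header above), which Squid then forwards to itself. A minimal squid.conf sketch that refuses such requests, assuming 10.0.5.50 is the proxy's own address; place the deny rule above your allow rules:
# /etc/squid/squid.conf
# deny requests whose destination is the proxy itself
acl toSelf dst 10.0.5.50
http_access deny toSelf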
I am not able to successfully bind and secure the RethinkDB HTTP client: it is either exposed to the whole network or refuses connections behind the proxy. I am thus left with no choice but to restart the rdb daemon with bind-http=all each time I want to access it...
Rdb starts with systemctl under Arch Linux. The three configurations I tried:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=localhost #(1)
bind-http=127.0.0.1 #(2)
bind-http=1.2.3.4 #(3)
Resulting in:
(1) fails to parse 'localhost'
(2) refuses connections behind the proxy
(3) is equivalent to bind-http=all
Firefox 59 uses a SOCKS proxy, which works OK, as the browser's IP address does become 1.2.3.4:
$ ssh -TND 8080 user@1.2.3.4
I am quite convinced that I had secured the HTTP client as expected, and the problems started after I updated both FF and rdb (FF59 fails to parse 'localhost' as well, for example). I don't know if this is a bug or a feature or if I am missing something; any help is most welcome. Many thanks.
Beware of the "localhost" string.
Configuring the rethinkdb server with:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=127.0.0.1
http-port=8084
and binding some local port with SSH:
[client]$ ssh -L 8080:127.0.0.1:8084 server
is enough to access the web interface at 127.0.0.1:8080, as suggested by @jishi.
Configuring the browser to use a SOCKS proxy as per the rdb docs is not at all necessary.
For some reason localhost:8080 is not understood by FF59 (it gets invisibly prefixed by www or something).
I'm unable to get my application, which uses WebSockets, to work.
I have a site, www.example.com, which uses an anti-DDoS service, so it resolves to IP X.X.X.X. The real address of the server is Y.Y.Y.Y. The anti-DDoS service does not proxy WebSocket traffic, so I wanted to stream it directly to the real address (in reality it's difficult for an attacker to find, so this will work). So instead of pointing the client at ws://www.example.com:100/, I pointed it at ws://Y.Y.Y.Y:100/.
Now if I access my application by the real IP (http://Y.Y.Y.Y), it connects to ws://Y.Y.Y.Y:100/ just fine, but if I use the http://www.example.com link (which resolves to X.X.X.X), ws://Y.Y.Y.Y:100/ won't connect, saying "WebSocket connection to 'ws://Y.Y.Y.Y:100/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED".
I guess this has something to do with security, but I don't know what exactly. Please help.
Maybe the WebSocket server sees a domain in the Origin HTTP header that differs from the domain in the Host HTTP header and refuses the connection because of that. The Origin header is commonly used to decide whether the connection is coming from an allowed website, since WebSockets are not subject to the Same-Origin Policy.
The request that works will look like this:
GET / HTTP/1.1
Host: Y.Y.Y.Y
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Version: 13
Origin: http://Y.Y.Y.Y
The request that is refused will look like this:
GET / HTTP/1.1
Host: Y.Y.Y.Y
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Version: 13
Origin: http://www.example.com
It is hard to tell with this little information. Do you get any errors in the server logs?
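One way to test that hypothesis from the command line, assuming the server listens on port 100 (a diagnostic sketch; the header values mirror the requests above): replay the handshake with each Origin and compare the responses.
$ curl -v http://Y.Y.Y.Y:100/ -H 'Connection: Upgrade' -H 'Upgrade: websocket' -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==' -H 'Origin: http://Y.Y.Y.Y'
$ curl -v http://Y.Y.Y.Y:100/ -H 'Connection: Upgrade' -H 'Upgrade: websocket' -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==' -H 'Origin: http://www.example.com'
If the first handshake returns 101 Switching Protocols and the second is rejected, the Origin check is the culprit; if both are refused at the TCP level, look at the firewall in front of Y.Y.Y.Y instead.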
I have set up a Squid proxy on EC2, and I'm trying to use it from behind a corporate firewall. After configuring Firefox to use my proxy, I tried to surf to yahoo.com. The browser seems to hang, as if handling an extremely long-running request. Checking the Squid logs, I see:
1431354246.891 11645 xxx.0.xx.xxx TCP_MISS/200 7150 CONNECT www.yahoo.com:443 username HIER_DIRECT/xx.xxx.XX.xx-
So far I don't have a good explanation for most of these entries, but from http://wiki.squid-cache.org/SquidFaq/SquidLogs#access.log I've found that:
MISS = The response object delivered was the network response object.
What does this mean? Is there anything I can do to connect to the outside internet?
This has been asked a long time ago, but maybe someone can still use this...
This means you connected to Squid and the request was made to Yahoo over TCP, the transport HTTP uses. Furthermore, the MISS means it was a cache miss: Squid doesn't have this page stored.
The hanging might be caused by the response being caught somewhere along the line (the corporate firewall, maybe? a local firewall?) or even by a misconfiguration of the proxy.
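A quick way to take the browser out of the equation, assuming the proxy listens on the usual port 3128 (a diagnostic sketch; substitute your EC2 hostname):
$ curl -v -x http://my-ec2-host:3128 https://www.yahoo.com/
If this also hangs, the problem is on the network path or in the proxy itself; if it completes, look at the browser configuration.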
For more, perhaps you should search on https://serverfault.com, for example this is a good starting point, then you can narrow down the problem: https://serverfault.com/questions/514716/whats-the-minimum-required-squid-config-to-make-a-public-proxy-server
Gradle 2.2 takes hours to build a project on a PC that takes 8 minutes on Linux. When run with --debug on the slow machine, Gradle reports no errors, but it stops and waits for approximately 2 minutes at every resource, after every User-Agent line:
18:39:15.819 [DEBUG] [org.apache.http.headers] >> User-Agent: Gradle/2.0 (Windows 7;6.1;amd64) (Oracle Corporation;1.7.0_67;24.65-b04)
<2 min. delay>
18:41:15.527 [DEBUG] [org.apache.http.impl.conn.DefaultClientConnection] Receiving response: HTTP/1.1 200 OK
18:41:15.527 [DEBUG] [org.apache.http.headers] << HTTP/1.1 200 OK
Linux workstations on the same subnet (behind the same firewall and using the same squid proxy) do not have this delay.
An extended snip from Windows is here; a snip from the Linux build around the same point is here.
This seems to have been a VERY STRANGE issue with a transparent HTTP proxy and the DansGuardian web filter. For still-unknown reasons, this one PC's HTTP traffic got mangled.
This is odd, because our entire LAN's HTTP traffic to the internet is content-filtered. There was a filtering exception that allowed any traffic from this slow PC to pass unfiltered, but that had the opposite of the expected effect: Gradle traffic became crazy slow on the 'unfiltered' PC, while content-filtered workstations had no problems. Even stranger, Gradle also ran at normal speed on unfiltered Linux workstations.
The workaround was to configure iptables and the transparent proxy to completely ignore the slow PC's HTTP traffic. So now it is unfiltered and unproxied. It has been nicknamed the pornstation.
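For reference, a hypothetical sketch of that exception, assuming the slow PC is 10.0.0.42 and port-80 traffic is normally redirected to the transparent proxy; the rule is inserted so it matches before the REDIRECT rule:
# let the problem PC bypass the transparent proxy entirely
iptables -t nat -I PREROUTING -s 10.0.0.42 -p tcp --dport 80 -j RETURN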
It happened to us as well, though in our case it was caused by the antivirus on the PC (NOD32, not to name it).
We had to completely disable the HTTP/web filters on it.
This may not be your case, but it may help others coming here for advice.
What is the keep-alive feature? How can I enable it?
The following is the output from Chrome's Page Speed plugin:
Enable Keep-Alive
The host {MYWEBSITE.COM} should enable Keep-Alive. It serves the following resources.
http://MYWEBSITE.com/
http://MYWEBSITE.com/fonts/AGENCYR.TTF
http://MYWEBSITE.com/images/big_mini/0002_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0003_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0004_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0005_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0006_mini.jpeg
http://MYWEBSITE.com/images/big_mini/0007_mini.jpeg
http://MYWEBSITE.com/images/.jpeg
http://MYWEBSITE.com/images/small/0002S.jpeg
http://MYWEBSITE.com/images/small/0003S.jpeg
http://MYWEBSITE.com/images/small/0004S.jpeg
http://MYWEBSITE.com/images/small/0005S.jpeg
http://MYWEBSITE.com/images/small/0006S.jpeg
http://MYWEBSITE.com/images/small/0007S.jpeg
http://MYWEBSITE.com/images/small/0008S.jpeg
http://MYWEBSITE.com/images/small/0009S.jpeg
http://MYWEBSITE.com/images/small/0010S.jpeg
http://MYWEBSITE.com/images/small/0011S.jpeg
http://MYWEBSITE.com/images/small/0012S.jpg
http://MYWEBSITE.com/images/small/0013S.jpeg
http://MYWEBSITE.com/images/small/0014S.jpeg
http://MYWEBSITE.com/images/small/0015S.jpeg
http://MYWEBSITE.com/images/small/0016S.jpeg
http://MYWEBSITE.com/images/small/0017S.jpeg
http://MYWEBSITE.com/images/small/0018S.jpeg
http://MYWEBSITE.com/images/small/0019S.jpeg
http://MYWEBSITE.com/yoxview/yoxview.css
http://MYWEBSITE.com/yoxview/images/empty.gif
http://MYWEBSITE.com/yoxview/images/left.png
http://MYWEBSITE.com/yoxview/images/popup_ajax_loader.gif
http://MYWEBSITE.com/yoxview/images/right.png
http://MYWEBSITE.com/yoxview/images/sprites.png
http://MYWEBSITE.com/yoxview/img3_mini.jpeg
http://MYWEBSITE.com/yoxview/jquery.yoxview-2.21.min.js
http://MYWEBSITE.com/yoxview/lang/en.js
http://MYWEBSITE.com/yoxview/yoxview-init.js
HTTP Keep-Alive (otherwise known as HTTP persistent connections) configures the HTTP server to hold a connection open so that the client can reuse it to send multiple requests, reducing the overhead of loading a page. Every server and environment is different, so setting it up depends on your environment.
In short: if you're using HTTP/1.0, add a Connection: Keep-Alive header when making the original request (assuming your server supports it). If the server supports it, it will return the same header back to you. If you're using HTTP/1.1 and the server is configured properly, it will use persistent connections automatically.
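For example, if the host runs Apache, the relevant httpd.conf directives look roughly like this (a sketch; nginx and other servers use different directives, e.g. keepalive_timeout):
# httpd.conf
KeepAlive On
# maximum requests a single connection may serve
MaxKeepAliveRequests 100
# seconds to wait for the next request before closing the connection
KeepAliveTimeout 5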
Be aware that while Keep-Alive provides some benefit at low volumes, it can perform poorly at high volumes for small and medium-sized sites (for example, if your blog gets Slashdotted). This Hacker News thread has some good background information.
In other words, while many of the PageSpeed recommendations are great across the board, this one should be taken with a grain of salt.