I'm using Squid to route traffic via the closest parent.
My problem is that when the RTT to a parent starts to increase, all requests go through the other parents; from then on, the RTT of the farthest parent is never updated, even if the connection returns to normal.
Is there any solution?
My configuration is:
cache_peer 192.168.1.10 parent 3128 3130 no-digest proxy-only round-robin-weighted
cache_peer 192.168.1.11 parent 3128 3130 no-digest proxy-only round-robin-weighted
netdb_ping_period 1 seconds
query_icmp on
never_direct allow all
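One thing I am considering, in case it matters: squid.conf.documented spells the balancing option weighted-round-robin, and Squid 3.x also lists a background-ping peer option described as keeping a peer's round-trip-time measurement updated even while requests are going elsewhere. A rough sketch of what I mean (untested, and availability depends on the Squid version):
cache_peer 192.168.1.10 parent 3128 3130 no-digest proxy-only weighted-round-robin background-ping
cache_peer 192.168.1.11 parent 3128 3130 no-digest proxy-only weighted-round-robin background-ping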
What I have discovered is that there are a number of ports on my Windows 10 box which (1) are not in use by any process and (2) I cannot listen on.
I discovered this problem trying to run a node server which used port 3000. I found a number of questions on this topic. This one is typical: Node.js Port 3000 already in use but it actually isn't?
The respondents to this and similar questions all suggest using "netstat -ano" to find the process that is using the port and killing it.
What I have found is that there is a large number of blocked ports that are not tied to any process. This is not related to AV or firewall: I turned off the firewall, and I have only Windows Defender AV.
I wrote a program to listen on the ports between 3000 and 5000 inclusive on 127.0.0.1.
int port = 3000;
while (port <= 5000)
{
    try
    {
        ListenOnPort(port);
        ++port;
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Listen on {port} failed: {ex.Message}");
        ++port;
    }
}
Where ListenOnPort is...
private static void ListenOnPort(int v)
{
    var uri = new UriBuilder("http", "127.0.0.1", v);
    HttpListener listener = new HttpListener();
    listener.Prefixes.Add(uri.Uri.ToString());
    Console.WriteLine($"Listening on {v}");
    listener.TimeoutManager.IdleConnection = new TimeSpan(0, 0, 1);
    listener.Start();
    var task = listener.GetContextAsync();
    if (task.Wait(new TimeSpan(0, 0, 1)))
    {
        HttpListenerResponse response = task.Result.Response;
        // Construct a response.
        string responseString = "<HTML><BODY> Hello world!</BODY></HTML>";
        byte[] buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
        // Get a response stream and write the response to it.
        response.ContentLength64 = buffer.Length;
        System.IO.Stream output = response.OutputStream;
        output.Write(buffer, 0, buffer.Length);
        // You must close the output stream.
        output.Close();
    }
    listener.Stop();
}
The program produced output similar to this...
Listening on 3000
Listen on 3000 failed: The process cannot access the file because it is being used by another process
Listening on 3001
Listen on 3001 failed: The process cannot access the file because it is being used by another process
Listening on 3002
Listen on 3002 failed: The process cannot access the file because it is being used by another process
Listening on 3003
Listen on 3003 failed: The process cannot access the file because it is being used by another process
Listening on 3004
Listen on 3004 failed: The process cannot access the file because it is being used by another process
Listening on 3005
Listen on 3005 failed: The process cannot access the file because it is being used by another process
Listening on 3006
Listening on 3007
Listening on 3008
Listening on 3009
Listening on 3010
What I discovered is that in the range 3000 to 5000 there are 624 ports which are blocked, while "netstat -ano" shows exactly 5 ports in use in that range. So what is blocking the other 619 ports?
Right...
While looking for something else I found the answer (or at least the source of the problem). The reason I cannot listen on these ports is that they are all part of excluded port ranges on Windows. To see the excluded ports, use...
netsh int ipv4 show excludedportrange protocol=tcp
And there, magically, is a list of all the ports I cannot listen on. These excluded port ranges apparently originate from Hyper-V and Docker, both of which I have installed. There is apparently a way to get the ports back, but it is not easy: it involves uninstalling Docker and Hyper-V, reserving the port ranges for yourself, and then reinstalling Hyper-V and Docker. Not worth it. Now that I know how to find the ports I cannot use, I will simply not use them!
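For anyone who does want to go down that road, the reservation step would presumably look something like this, run from an elevated prompt while Hyper-V and Docker are not holding the range (the numbers here are only an example, not something I actually ran):
netsh int ipv4 add excludedportrange protocol=tcp startport=3000 numberofports=1000 store=persistent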
Set the Windows "Dynamic Port Range" to a non-conflicting place
We managed to contain this problem for the case where you cannot change your port numbers (for example, a non-configurable application).
When you issue the command:
netsh int ip show excludedportrange protocol=tcp
You get an output with a list of port ranges reserved:
Protocol tcp Port Exclusion Ranges
Start Port End Port
---------- --------
33474 33573
50000 50059 *
58159 58258
58259 58358
58359 58458
58459 58558
58559 58658
58659 58758
58759 58858
* - Administered port exclusions.
The most likely reason for this is Windows Hyper-V (Microsoft's hardware virtualization product), which reserves random port ranges (usually blocks of 100 ports).
This becomes a pain, because if you are developing an application or larger solution that uses multiple ports, sometimes you get a conflict after rebooting your system and sometimes you do not.
To find the "Dynamic Port Range" you can issue the command:
netsh int ipv4 show dynamicport tcp
The answer:
Protocol tcp Dynamic Port Range
---------------------------------
Start Port : 1024
Number of Ports : 64511
You can instruct Windows to move this range out of the conflicting area. Let's say your development uses ports up to 60000; you can issue the following command to keep the dynamic port range above it (you must have administrator privileges):
netsh int ipv4 set dynamicport tcp start=60001 num=5534
To make Hyper-V (and Windows in general) use this new dynamic range you have to reboot your system.
Now if we request the excluded port range:
netsh int ip show excludedportrange protocol=tcp
The response has changed:
Protocol tcp Port Exclusion Ranges
Start Port End Port
---------- --------
50000 50059 *
63904 64003
64004 64103
64105 64204
64205 64304
64305 64404
64405 64504
64505 64604
64605 64704
* - Administered port exclusions.
Only the "Administered port exclusions" entry remains below port 60001.
I use the ZeroMQ PUB-SUB pattern to send notifications to workers. After running for several days, the PUB side lost TCP connections last night.
I create ONE PUB socket on the server and have 230 SUB clients.
Among them, 90 SUB clients receive slowly because of heavy CPU work after they receive the published messages. The PUB side lost the TCP connections for these 90 subscribers.
pyzmq:17.0.0
python:2.7.5
In my design, a slow SUB should be normal because the worker's handling is slow, and the HWM should protect the PUB-SUB pattern. Any suggestions?
[root@localhost apolo]# netstat -an|grep "127.0.0.1:5000 ESTABLISHED"|wc -l
230
[root@localhost apolo]# netstat -an|grep "0 127.0.0.1:5000"|wc -l
141
PUB code
zmq_publish = context.socket(zmq.PUB)
zmq_publish.bind("tcp://127.0.0.1:5000")
SUB code
zmq_subscripe = context.socket(zmq.SUB)
zmq_subscripe.connect("tcp://127.0.0.1:5000")
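What I am thinking of trying is setting the high-water marks and TCP keepalive options explicitly instead of relying on the defaults. A rough sketch of that idea (untested; the numbers are placeholders, not values I have validated):
import zmq

context = zmq.Context()

# PUB side: bound the per-subscriber send queue and enable TCP keepalive
zmq_publish = context.socket(zmq.PUB)
zmq_publish.setsockopt(zmq.SNDHWM, 1000)
zmq_publish.setsockopt(zmq.TCP_KEEPALIVE, 1)
zmq_publish.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 60)
zmq_publish.bind("tcp://127.0.0.1:5000")

# SUB side: matching receive-side limit, subscribe to everything
zmq_subscripe = context.socket(zmq.SUB)
zmq_subscripe.setsockopt(zmq.RCVHWM, 1000)
zmq_subscripe.setsockopt(zmq.SUBSCRIBE, b"")
zmq_subscripe.connect("tcp://127.0.0.1:5000")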
I am using HAProxy and I have been trying to set it up to work a certain way.
I want it so that if server 11.111.11.110 connects then it will always hit ABC_server01 unless that server is offline.
However this is how I have it currently written using weights:
acl the_workstation src 11.111.11.110
use_backend ABC if the_workstation
backend ABC
server ABC_server01 22.222.22.220:443 weight 255 maxconn 512 check
server ABC_server02 33.333.33.333:443 weight 1 maxconn 512 check
server ABC_server03 44.444.44.444:443 weight 1 maxconn 512 check
With what is written above, I believe that out of 257 connection attempts, 2 will not use ABC_server01.
I looked into conditionals, loops, and timeouts, but I was not able to come up with a working solution.
https://www.haproxy.org/coding-style.html
http://www.haproxy.org/download/1.5/doc/configuration.txt
Does anyone know a simple way to make it prioritize connections to one server and only use the remaining servers if that connection fails?
This is the current version of HAProxy I am using: "HA-Proxy version 1.5.18 2016/05/10".
We found the solution: we altered the configuration to look like this:
acl the_workstation src 11.111.11.110
use_backend ABC if the_workstation
backend ABC
server ABC_server01 22.222.22.220:443 weight 255 maxconn 512 check
server ABC_server02 33.333.33.333:443 weight 1 maxconn 512 check backup
server ABC_server03 44.444.44.444:443 weight 1 maxconn 512 check backup
By adding backup, HAProxy will only hit those servers if the first one is offline.
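One detail worth noting: by default HAProxy only sends traffic to the first available backup server, so ABC_server03 is only used if ABC_server02 is also down. If you ever want both backups to share the load when ABC_server01 fails, the option allbackups directive should cover that; a sketch we have not tested ourselves:
backend ABC
    option allbackups
    server ABC_server01 22.222.22.220:443 weight 255 maxconn 512 check
    server ABC_server02 33.333.33.333:443 weight 1 maxconn 512 check backup
    server ABC_server03 44.444.44.444:443 weight 1 maxconn 512 check backup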
I would like to use five TCP and five UDP streams (sending and receiving) in parallel on a single host, where the UDP traffic consists of video, and the TCP traffic is arbitrary. How do I use both transport layers in parallel on the same node?
In INET 3.0.0 there is an example called nclients in the examples\inet directory. It could be a good starting point for preparing your model.
As long as the TCP and UDP traffic is independent, you can easily install several UDP and TCP applications at the same time in the same host. Something like this:
**.cli[*].numTcpApps = 2
**.cli[*].tcpApp[0].typename = "TelnetApp"
**.cli[*].tcpApp[1].typename = "TCPBasicClientApp"
**.cli[*].numUdpApps = 2
**.cli[*].udpApp[0].typename = "UDPVideoStreamSvr"
**.cli[*].udpApp[1].typename = "UDPVideoStreamSvr"
// ... further configuration of the applications
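For the five-and-five case in the question, the same pattern simply scales up. An untested sketch along the same lines (the app types are only suggestions; UDPVideoStreamCli would be the receiving end of a video stream, and each app still needs its own address, port, and traffic parameters):
**.cli[*].numTcpApps = 5
**.cli[*].tcpApp[*].typename = "TCPBasicClientApp"
**.cli[*].numUdpApps = 5
**.cli[*].udpApp[*].typename = "UDPVideoStreamCli"
// ... per-app addresses, ports and traffic parameters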
I'm trying to learn the ropes on packet queuing, so I thought I'd set up a limitation on traffic coming into port 80 from known Tor Exit nodes. This is on FreeBSD 9, so OpenBSD-specific solutions might not apply (syntax/etc).
# Snipped to mainly the relevant parts
table <torlist> persist file "/var/db/torlist"
# ...
set block-policy return
scrub in all
scrub out on $ext_if all
# What I *want* to do is create a queue for known tor exit nodes
# no single one IP should be able to do more than 56k/sec
# but the combined bandwidth of all tor visitors should not
# exceed 512k/sec, basically limiting Tor visitors to something
# like dialup
altq on $ext_if cbq bandwidth 512k queue { qin-tor }
queue qin-tor bandwidth 56Kb cbq ( default rio )
# ...
block in log all
antispoof for { $ext_if, $tun_if }
antispoof quick for $int_if inet
### inbound web rules
# Main Jail ($IP4_PUB3 is my webserver IP)
pass in on $ext_if inet proto tcp from <torlist> to $IP4_PUB3 port www synproxy state queue qin-tor
pass in on $ext_if inet proto tcp to $IP4_PUB3 port www synproxy state
The problem is that when the altq, queue, and pass lines specific to torlist are enabled, all connections are extremely slow. I've even tested my own IP with pfctl -t torlist -T test and got back "0/1 addresses match", while an address from the list gives "1/1 addresses match".
So I'm not really sure what exactly I'm doing wrong. I was assuming the pass in line with <torlist> in it would only be applied to the IPs listed in that table, so my own IP wouldn't match that rule and would fall through to the next one.
Getting it working isn't urgent, but any help in understanding where I'm failing would be greatly appreciated.
It turns out that I didn't quite understand how altq works. By declaring altq on my external interface with only one queue, I made that queue the default for all connections. The fix was to declare the interface's full bandwidth and create a separate default queue for everything else.
For example, if my link tops out at 100Mb:
altq on $ext_if cbq bandwidth 100Mb queue { qin-www, qin-tor }
queue qin-www bandwidth 98Mb priority 1 cbq ( default borrow )
queue qin-tor bandwidth 56Kb priority 7 cbq ( rio )
...
pass in on $ext_if inet proto tcp to $IP4_PUB3 port www synproxy state
pass in on $ext_if inet proto tcp from <torlist> to $IP4_PUB3 port www synproxy state queue qin-tor
(the torlist rule doesn't need to be on top, since pf evaluates all rules and the last match wins unless you use 'quick')
This way, only the IPs matching <torlist> get throttled down to the qin-tor queue; everything else defaults to the qin-www queue.
The OpenBSD pf FAQ didn't make this clear to me until I thought about why there would be an error about a missing "default"; then I figured it applies to the whole interface, so you need to define a default queue for traffic not assigned to a specific queue.
So there it is... the solution to my 'simple' problem. Hopefully anyone else who has this problem comes across this.
This is the FAQ I was going by for packet queueing: http://www.openbsd.org/faq/pf/queueing.html