Why won't the sockets open in bulk all at once? - macos

OSX socket programming
I am using macOS Big Sur 11.5.1 on an Intel Mac.
The connection test is conducted against a local Docker nginx server.
We are using Go to run the test with the following code:
package main

import (
	"fmt"
	"log"
	"net"
	"sync"
	"testing"
	"time"
)

func TestBulkConnection(t *testing.T) {
	var worker = 1000
	var wg sync.WaitGroup
	for i := 0; i < worker; i++ {
		wg.Add(1)
		//time.Sleep(time.Millisecond * 10)
		go func(id int) {
			conn, err := net.Dial("tcp", "localhost:9000")
			if err != nil {
				log.Fatal(err)
			}
			defer conn.Close()
			defer wg.Done()
			fmt.Println("waiting... ", id)
			time.Sleep(time.Second * 30)
		}(i)
	}
	wg.Wait()
}
The 1000 goroutines only connect to nginx.
After connecting, the sleep() call is used to make sure nothing else is done.
The client created 1000 goroutines, but only 200 to 300 connections to nginx were actually established and the rest were not (we confirmed with netstat -anv | grep 9000).
If the commented-out sleep() between connection attempts is enabled, all connections are established.
When the same nginx server and client code were run on a personal Ubuntu 18.04 machine, all connections were established at once.
I suspect a problem on the nginx server side, but I don't know the cause.
Is there a difference between Mac and Ubuntu in this test?
Added
let net = require('net');

for (let i = 0; i < 1000; i++) {
	const socket = net.connect({ port: 9000 });
	socket.on('connect', function () {
		console.log('connected to server!');
	});
}
netstat -anv | grep 9000 | wc -l
The output is 2000 — all 1000 connections were established (each local connection appears twice in netstat, once for each endpoint).
Added
I used the following link to increase the file descriptor limits on macOS:
https://wilsonmar.github.io/maximum-limits/
I also ran 'csrutil disable' in recovery mode.
$ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 2048
-n: file descriptors 524288
But the result is still the same:
$ netstat -anv | grep 9000 | wc -l
287

Every Unix-based OS has limits on the number of file descriptors a process can open. Every socket consumes a file descriptor, just like opening a file on disk or your stdin, stdout & stderr.
MacOS by default sets the limit of file descriptors to 256 per process. Therefore your statement that your Go process stops at around 200-300 connections sounds right. In theory it should stop being able to open sockets after 253 connections (3 file descriptors already assigned to stdin, stdout & stderr).
Ubuntu on the other hand sets the default limit to 1024. You will still have this issue on Ubuntu but it will be able to open more sockets before you hit the wall.
On both systems you can check this limit by running the following command:
ulimit -n
Note: you can run ulimit -a to see all the limits.
On MacOS you can change this limit temporarily system-wide (it will reset after reboot) with the following command:
sudo launchctl limit maxfiles 1024 1024
On Ubuntu you can change this limit in your current shell with the following command:
ulimit -n 1024
This should let your 1000 connections succeed. Note that the number does not have to be a power of 2; you can pass 1500, for example. Just remember that your process uses file descriptors for things other than sockets.
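For completeness, a process can also inspect and raise its own soft limit at startup. The following is a minimal sketch using Go's syscall package; the target value of 2048 is an arbitrary example and must not exceed the hard limit:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("soft=%d hard=%d\n", rl.Cur, rl.Max)

	// Raise the soft limit for this process only; 2048 is an
	// arbitrary example and must not exceed rl.Max.
	rl.Cur = 2048
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
}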

Related

TCP Listener is not shut down completely

I have a TCP listener that is initialized as follows:
myListener := net.Listen("tcp", addr)
I am then able to receive connections and process them. Next I need to close the server so that I can reuse the same port, but this is not happening. This is how I am closing the TCP server:
myListener.Close()
On the client side I am closing all the existing TCP connections to that server, and from the terminal I can see those connections being closed, but the port is still in use by the server and listening (even though it is not accepting new connections, which is correct according to the documentation). This is how I check in the terminal:
netstat -an | grep 8080
And after closing the client-side connections I get this and cannot reuse the port:
tcp46 0 0 *.8080 *.* LISTEN
After calling myListener.Close() I waited some time, but the terminal still shows the port in use.
In addition to checking the error from net.Listen, as stated in https://stackoverflow.com/a/65638937/1435495:
You will also want to add a defer to your myListener.Close(); this will help ensure that the close actually executes even if something causes the app to exit prematurely.
defer myListener.Close()
The net.Listen function returns two values (Listener, error); in your example above you appear to be capturing only the Listener and not the error.
Assuming you are actually capturing it, you should check whether the error is nil before you begin using the listener.
package main

import "net"

func main() {
	myListener, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	myListener.Close()
}
Something similar to the snippet above should work. Now if you're not getting an error (though I presume you will get one), the problem is likely that something else is already using that port.
Try running your netstat as root so you can see all processes, which will give you a better idea of what is holding on to that port.
sudo netstat -apn | grep -i listen | grep 8080
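For illustration, here is a self-contained sketch of the full lifecycle; the port and the loopback dial are placeholders. It shows that the same address can be bound again as soon as Close has returned:

package main

import (
	"fmt"
	"net"
)

func listenOnce(addr string) error {
	ln, err := net.Listen("tcp", addr) // capture both return values
	if err != nil {
		return err
	}
	defer ln.Close() // releases the port even on an early return

	// Dial ourselves so Accept returns in this self-contained sketch.
	go func() {
		if c, err := net.Dial("tcp", addr); err == nil {
			c.Close()
		}
	}()

	conn, err := ln.Accept()
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Binding the same port twice in a row succeeds because each
	// listener is closed before the next Listen call.
	for i := 0; i < 2; i++ {
		if err := listenOnce("127.0.0.1:8080"); err != nil {
			fmt.Println(err)
		}
	}
}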

Error during go build/run execution

I've created a simple Go script: https://gist.github.com/kbl/86ed3b2112eb80522949f0ce574a04e3
It fetches some XML from the internet and then starts X goroutines, where X depends on the file content. In my case it was 1700 goroutines.
My first execution finished with:
$ go run mathandel1.go
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/162152?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
2018/01/27 14:19:37 Get https://www.boardgamegeek.com/xmlapi/boardgame/148517?pricehistory=1&stats=1: dial tcp 72.233.16.130:443: socket: too many open files
exit status 1
I've tried to increase ulimit to 2048.
Now I'm getting a different error, though the script is the same:
$ go build mathandel1.go
# command-line-arguments
/usr/local/go/pkg/tool/linux_amd64/link: flushing $WORK/command-line-arguments/_obj/exe/a.out: write $WORK/command-line-arguments/_obj/exe/a.out: file too large
What is causing that error? How can I fix that?
You ran ulimit 2048, which changed the maximum file size.
From man bash(1), ulimit section:
If no option is given, then -f is assumed.
This means that you have now set the maximum file size to 2048 blocks (512-byte blocks, i.e. 1 MiB), which is not even enough for the linker to write the compiled binary.
I'm guessing you meant to change the limit for number of open file descriptors. For this, you want to run:
ulimit -n 2048
As for the original error (before changing the maximum file size): you're launching 1700 goroutines, each performing an HTTP GET. Each creates a connection, using a TCP socket. These are covered by the open file descriptor limit.
Instead, you should be limiting the number of concurrent downloads. This can be done with a simple worker pool pattern.
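As a rough sketch of that pattern (the pool size of 50 and the URL list are placeholders):

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	urls := []string{ /* ... the 1700 URLs ... */ }
	jobs := make(chan string)
	var wg sync.WaitGroup

	// 50 workers means at most 50 sockets (file descriptors) in flight.
	for w := 0; w < 50; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for url := range jobs {
				resp, err := http.Get(url)
				if err != nil {
					fmt.Println(err)
					continue
				}
				io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
				resp.Body.Close()
			}
		}()
	}

	for _, u := range urls {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
}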

go-ping library for unprivileged ICMP ping in golang

I have been using the go-ping library for unprivileged pings and to calculate various network statistics in Go.
The code snippet is as follows:
func (p *Ping) doPing() (latency, jitter, packetLoss float64, err error) {
	timeout := time.Second * 1000
	interval := time.Second
	count := 5
	host := p.ipAddr
	pinger, cmdErr := ping.NewPinger(host)
	if cmdErr != nil {
		glog.Error("Failed to ping " + p.ipAddr)
		err = cmdErr
		return
	}
	pinger.Count = count
	pinger.Interval = interval
	pinger.Timeout = timeout
	pinger.SetPrivileged(false)
	pinger.Run()
	stats := pinger.Statistics()
	latency = float64(stats.AvgRtt)
	jitter = float64(stats.StdDevRtt)
	packetLoss = stats.PacketLoss
	return
}
It was working fine, but now it has started throwing this error:
"Error listening for ICMP packets: socket: permission denied"
Does anyone know the reason behind this? The Go version I am using is go1.7.4.
This is in the README.md of the library you're using:
This library attempts to send an "unprivileged" ping via UDP. On linux, this must be enabled by setting
sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"
If you do not wish to do this, you can set pinger.SetPrivileged(true) and use setcap to allow your binary using go-ping to bind to raw sockets (or just run as super-user):
setcap cap_net_raw=+ep /bin/goping-binary
See this blog and the Go icmp library for more details.
Hope it helps!
Make sure your settings haven't changed in any way. Using ping from the package still works for me on a 32-bit Ubuntu 16.04 with Go 1.7.4 (linux/386) if I have previously set net.ipv4.ping_group_range according to the instructions on GitHub.
Note on Linux Support:
This library attempts to send an "unprivileged" ping via UDP. On Linux, this must be enabled by setting
sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"
If you do not wish to do this, you can set pinger.SetPrivileged(true) and use setcap to allow your binary using go-ping to bind to raw sockets (or just run as super-user):
setcap cap_net_raw=+ep /bin/goping-binary
See this blog and the Go icmp library for more details.
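As a supplement, here is a rough self-contained sketch of what that unprivileged mode does under the hood, using the golang.org/x/net/icmp package mentioned above. The 8.8.8.8 target is only an example, and the ping_group_range sysctl must already be set:

package main

import (
	"fmt"
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

func main() {
	// Listening on "udp4" instead of "ip4:icmp" is what makes this
	// unprivileged; it fails unless ping_group_range includes our group.
	conn, err := icmp.ListenPacket("udp4", "0.0.0.0")
	if err != nil {
		fmt.Println(err) // the question's "permission denied" shows up here
		return
	}
	defer conn.Close()

	msg := icmp.Message{
		Type: ipv4.ICMPTypeEcho,
		Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("ping")},
	}
	b, err := msg.Marshal(nil)
	if err != nil {
		fmt.Println(err)
		return
	}

	// With a "udp4" socket the destination is a UDP address.
	if _, err := conn.WriteTo(b, &net.UDPAddr{IP: net.ParseIP("8.8.8.8")}); err != nil {
		fmt.Println(err)
		return
	}

	reply := make([]byte, 1500)
	conn.SetReadDeadline(time.Now().Add(3 * time.Second))
	n, peer, err := conn.ReadFrom(reply)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("got %d bytes from %v\n", n, peer)
}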

Read fails after tcpreplay with error: 0: Resource temporarily unavailable

I have a very simple script to run. It calls tcpreplay and then asks the user to type in something. The read then fails with read: read error: 0: Resource temporarily unavailable.
Here is the code
#!/bin/bash
tcpreplay -ieth4 SMTP.pcap
echo TEST
read HANDLE
echo $HANDLE
And the output is
[root@vse1 quick_test]# ./test.sh
sending out eth4
processing file: SMTP.pcap
Actual: 28 packets (4380 bytes) sent in 0.53 seconds. Rated: 8264.2 bps, 0.06 Mbps, 52.83 pps
Statistics for network device: eth4
Attempted packets: 28
Successful packets: 28
Failed packets: 0
Retried packets (ENOBUFS): 0
Retried packets (EAGAIN): 0
TEST
./test.sh: line 6: read: read error: 0: Resource temporarily unavailable
[root@vse1 quick_test]#
I am wondering if I need to close or clear up any handles or pipes after I run tcpreplay?
Apparently tcpreplay sets O_NONBLOCK on stdin and then doesn't remove it. I'd say it's a bug in tcpreplay. To work around it, you can run tcpreplay with stdin redirected from /dev/null, like this:
tcpreplay -i eth4 SMTP.pcap </dev/null
Addition: note that this tcpreplay behavior breaks non-interactive shells only.
Another addition: alternatively, if you really need tcpreplay to receive your input, you can write a short program which resets O_NONBLOCK, like this one (reset-nonblock.c):
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main()
{
	if (fcntl(STDIN_FILENO, F_SETFL,
	          fcntl(STDIN_FILENO, F_GETFL) & ~O_NONBLOCK) < 0) {
		perror(NULL);
		return 1;
	}
	return 0;
}
Make it with "make reset-nonblock", then put it in your PATH and use like this:
tcpreplay -i eth4 SMTP.pcap
reset-nonblock
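If you prefer to stay in Go, the same reset can be sketched with the golang.org/x/sys/unix package (this helper is hypothetical, not part of tcpreplay):

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	fd := os.Stdin.Fd()
	flags, err := unix.FcntlInt(fd, unix.F_GETFL, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Clear O_NONBLOCK, leaving all other flags intact.
	if _, err := unix.FcntlInt(fd, unix.F_SETFL, flags&^unix.O_NONBLOCK); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}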
While the C solution works, you can turn off nonblocking input in one line from the command-line using Python. Personally, I alias it to "setblocking" since it is fairly handy.
$ python3 -c $'import os\nos.set_blocking(0, True)'
You can also have Python print the previous state so that it may be changed only temporarily:
$ o=$(python3 -c $'import os\nprint(os.get_blocking(0))\nos.set_blocking(0, True)')
$ somecommandthatreadsstdin
$ python3 -c $'import os\nos.set_blocking(0, '$o')'
Resource temporarily unavailable is EAGAIN (or EWOULDBLOCK), the error code returned by a nonblocking file descriptor when no further data is available (the call would block if the descriptor weren't in nonblocking mode). The previous command (tcpreplay in this case) erroneously left stdin in nonblocking mode. The shell will not correct it, and the following process isn't prepared to work with a non-default, nonblocking stdin.
In your script, you can also turn off nonblocking with:
perl -MFcntl -e 'fcntl STDIN, F_SETFL, fcntl(STDIN, F_GETFL, 0) & ~O_NONBLOCK'

Why can I only open 2045 files with Tie::File on Windows?

I have the following code that ties arrays to files. However, when I run it, it only creates 2045 files. What is the issue here?
#!/usr/bin/perl
use Tie::File;

for (my $i = 0; $i < 10000; $i++) {
    @files{$i} = ();
    tie @{$files{$i}}, 'Tie::File', "files//tiefile$i";
}
Edit: I am on Windows.
You are accumulating open file handles (see ulimit -n, setrlimit RLIMIT_NOFILE/RLIMIT_OFILE), and you ultimately hit a 2048 open file descriptor limit (2045 + stdin + stdout + stderr).
Under Windows you will have to rewrite your application so that it has at most 2048 open file handles at any one time, since the 2048 limit is a hard limit (it cannot be modified) in MSVC's stdio.
On Linux machines, go to /etc/security/limits.conf and add or modify these lines:
* soft nofile 10003
* hard nofile 10003
This will increase the number of files each process can have open to 10003 (remember that you always start with three open: stdin, stdout, and stderr).
Based on your comments it sounds like you are using a Win32 machine. I can't find a way to increase the number of open files per process, but you might, and I stress might, be able to handle this through fork'ing (which is really threading on Win32).
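Whatever the per-process limit turns out to be, the portable fix is to bound how many files are open at the same time. A rough sketch of that pattern, in Go to match the other examples in this thread (the cap of 1000 is arbitrary):

package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	os.MkdirAll("files", 0o755)
	sem := make(chan struct{}, 1000) // at most 1000 files open at once
	var wg sync.WaitGroup
	for i := 0; i < 10000; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot before opening
			defer func() { <-sem }() // release it when done
			f, err := os.Create(fmt.Sprintf("files/tiefile%d", i))
			if err != nil {
				fmt.Println(err)
				return
			}
			// ... use the file ...
			f.Close()
		}(i)
	}
	wg.Wait()
}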
