CLI output: killing the process in the middle but not getting the required output - Go

I am currently working on a project which requires me to send a curl request. The response to that request contains a link which expires in 10 seconds; the link starts a download. I am trying to send the request and download the file from the link in a Go program. The problem is that the request which prints the link takes about 10 seconds to complete, and by then I can no longer access the link. So I decided to kill the process prematurely using the timeout command, and from the shell I am able to download what I need. But when I try the same thing in Go, I do not get the output that was displayed before the process was killed.
I only get this output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1320    0  1043  100   277     98     26  0:00:10  0:00:10 --:--:--    12
I use the following Go code:
cmd := exec.Command("chmod", "744", "end.sh")
out, err := cmd.Output()

cmd1 := exec.Command("./end.sh")
out, err = cmd1.CombinedOutput()
if err != nil {
    log.Fatalf("cmd.Run(endpoint) failed with %s\n", err)
}
fmt.Println(string(out))
This code calls a shell script.
timeout 5 curl XXXXXXXXX
So what could I do, by modifying either the shell script or the Go program, to get the output?
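One approach is to move the timeout into Go itself with exec.CommandContext and to print whatever output was captured even though the kill makes the command return an error. Note that timeout exits with status 124 when it kills curl, so the log.Fatalf in the code above runs before fmt.Println can print the captured output. Below is a minimal sketch under that assumption; the URL is a placeholder for your endpoint:

package main

import (
    "context"
    "fmt"
    "log"
    "os/exec"
    "time"
)

func main() {
    // Enforce the 5-second cutoff in Go instead of relying on the
    // shell's timeout command; the context kills curl for us.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Placeholder URL; -sS silences the progress meter but keeps errors.
    cmd := exec.CommandContext(ctx, "curl", "-sS", "https://example.com/endpoint")
    out, err := cmd.CombinedOutput()
    if err != nil {
        // Expected when the deadline kills curl. Log it, but do NOT
        // call log.Fatalf here, or the output below is never printed.
        log.Printf("curl ended early: %v", err)
    }
    fmt.Println(string(out))
}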

Related

How to make squeue display time limits in hours only?

When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours, instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is simply the way Slurm displays these durations. The elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert the value into a different format (a sketch of the conversion logic follows the example output below). Or, if you know the time limit is no more than a day, just set the time limit to 23:59:00 by using --time=1439 (the value is in minutes):
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
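If you take the wrapper route, the only non-trivial part is converting Slurm's days-hours:minutes:seconds limit into plain hours. Here is a minimal sketch of that conversion logic in Go (an illustration only; a few lines of awk would do the same job, and Slurm also accepts shorter forms such as plain minutes that this sketch does not handle):

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// toHours converts a Slurm time limit such as "1-00:00:00" or "23:59:00"
// into whole hours, rounding the minutes/seconds part down.
func toHours(limit string) (int, error) {
    days := 0
    rest := limit
    if i := strings.IndexByte(limit, '-'); i >= 0 {
        d, err := strconv.Atoi(limit[:i])
        if err != nil {
            return 0, err
        }
        days, rest = d, limit[i+1:]
    }
    h, err := strconv.Atoi(strings.Split(rest, ":")[0])
    if err != nil {
        return 0, err
    }
    return days*24 + h, nil
}

func main() {
    for _, l := range []string{"1-00:00:00", "23:59:00"} {
        h, _ := toHours(l)
        fmt.Printf("%s -> %d\n", l, h) // 1-00:00:00 -> 24, 23:59:00 -> 23
    }
}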

Why won't the socket open in bulk at once?

OSX socket programming
I am using macOS Big Sur 11.5.1 on an Intel Mac.
The connection test is run against a local nginx server in Docker.
I am testing with the following Go code:
package main

import (
    "fmt"
    "log"
    "net"
    "sync"
    "testing"
    "time"
)

func TestBulkConnection(t *testing.T) {
    var worker = 1000
    var wg sync.WaitGroup
    for i := 0; i < worker; i++ {
        wg.Add(1)
        //time.Sleep(time.Millisecond * 10)
        go func(id int) {
            conn, err := net.Dial("tcp", "localhost:9000")
            if err != nil {
                log.Fatal(err)
            }
            defer conn.Close()
            defer wg.Done()
            fmt.Println("waiting... ", id)
            time.Sleep(time.Second * 30)
        }(i)
    }
    wg.Wait()
}
1000 goroutines connect to nginx and do nothing else.
After connecting, a sleep is used to make sure nothing further is done.
The client creates 1000 goroutines, but only 200 to 300 connections to nginx actually succeed and the rest do not (confirmed with netstat -anv | grep 9000).
If a delay is added between connection attempts (the commented-out sleep above), all connections are established correctly.
Running the same nginx and client code on a personal Ubuntu 18.04 machine, all connections are established at once.
I suspect a problem on the nginx server side, but I don't know the cause.
Is there a difference between Mac and Ubuntu in this test?
Added: the same test from Node.js:
let net = require('net');
for (let i = 0; i < 1000; i++) {
    const socket = net.connect({ port: 9000 });
    socket.on('connect', function () {
        console.log('connected to server!');
    });
}
$ netstat -anv | grep 9000 | wc -l
2000
All connections are established.
Added: I used the following page to increase the file descriptor limits on macOS.
https://wilsonmar.github.io/maximum-limits/
In recovery mode, 'csrutil disable' was also executed.
$ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-v: address space (kbytes) unlimited
-l: locked-in-memory size (kbytes) unlimited
-u: processes 2048
-n: file descriptors 524288
But still:
$ netstat -anv | grep 9000 | wc -l
287
Every Unix-based OS has limits on the number of file descriptors a process can open. Every socket consumes a file descriptor, just like opening a file on disk or your stdin, stdout and stderr.
macOS by default sets the limit to 256 file descriptors per process, so your observation that your Go process stops at around 200-300 connections sounds right. In theory it should stop being able to open sockets after 253 connections (3 file descriptors are already assigned to stdin, stdout and stderr).
Ubuntu, on the other hand, sets the default limit to 1024. You will still have this issue on Ubuntu, but it will be able to open more sockets before you hit the wall.
On both systems you can check this limit by running the following command:
ulimit -n
Note: you can run ulimit -a to see all the limits.
On MacOS you can change this limit temporarily system-wide (it will reset after reboot) with the following command:
sudo launchctl limit maxfiles 1024 1024
On Ubuntu you can change this limit in your current shell with the following command:
ulimit -n 1024
This should let your 1000 connections succeed. Note that the number does not have to be a power of 2; you can pass 1500, for example. Just remember that your process uses file descriptors for things other than sockets too.
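You can also raise the soft limit from inside the Go process itself, which avoids depending on the shell. A minimal sketch (two assumptions worth flagging: on macOS the hard limit can report as unlimited while the kernel still caps the value at kern.maxfilesperproc, so a modest request such as 4096 is safer than asking for the hard limit; and Go 1.19 and later already raise the soft limit to the hard limit at startup, so this mainly matters on older toolchains):

package main

import (
    "fmt"
    "log"
    "syscall"
)

func main() {
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("before: soft=%d hard=%d\n", lim.Cur, lim.Max)

    // Ask for 4096 descriptors, plenty for the 1000-connection test,
    // without exceeding the hard limit (only root may raise that).
    want := uint64(4096)
    if want > lim.Max {
        want = lim.Max
    }
    lim.Cur = want
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        log.Fatal(err)
    }
}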

Bash script - check how many times public IP changes

I am trying to create my first bash script. The goal of the script is to check how often my public IP changes. It is a fairly straightforward script: first it checks whether the new address is different from the old one; if so, it updates the old one to the new one and prints the date along with the new IP address.
At this point I have created a simple script in order to accomplish this. But I have two main problems.
First, the script keeps printing the IP even though it hasn't changed and I have updated PREV_IP with CUR_IP.
My second problem is that I want the output to go to a file instead of the terminal.
The interval is currently set to 1 second for test purposes. This will change to a higher interval in the final product.
#!/bin/bash
while true
PREV_IP=00
do
    CUR_IP=$(curl https://ipinfo.io/ip)
    if [ $PREV_IP != "$CUR_IP" ]; then
        PREV_IP=$CUR_IP
        "$(date)"
        echo "$CUR_IP"
        sleep 1
    fi
done
I also get really weird output. I have redacted my public IP as xx.xxx.xxx.xxx:
Sat 20 Mar 09:45:29 CET 2021
xx.xxx.xxx.xxx
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--
while true
PREV_IP=00
do
is the reason you are seeing the IP on each loop. It is the same as while true; PREV_IP=00; do. The exit status of true; PREV_IP=00 is the exit status of the last command, and the exit status of an assignment is 0 (success), so the loop will always execute. But PREV_IP is reset to 00 on each iteration. This is a typo: you meant to set PREV_IP once, before the loop starts.
"$(date)"
will try to execute the output of the date command as the next command. So it will print:
$ "$(date)"
bash: sob, 20 mar 2021, 10:57:02 CET: command not found
And finally, to silence curl, read man curl first and find out about -s. I use -sS so that errors remain visible.
Do not use uppercase variable names in your scripts; prefer lowercase variables. Check your scripts with http://shellcheck.net. Quote variable expansions.
I would sleep each loop. Your script could look like this:
#!/bin/bash
prev=""
while true; do
    cur=$(curl -sS https://ipinfo.io/ip)
    if [ "$prev" != "$cur" ]; then
        prev="$cur"
        echo "$(date) $cur"
    fi
    sleep 1
done
I want the output to go to a file instead of the terminal.
Then research how redirection works in shell and how to use it. The simplest would be to redirect echo output.
echo "$(date) $cur" >> "a_file.txt"
The interval is currently set to 1 second for test purposes. This will change to a higher interval in the final product.
You are still limited by the time it takes to connect to https://ipinfo.io/ip. And from the ipinfo.io documentation:
Free usage of our API is limited to 50,000 API requests per month.
And finally, I once wrote a script, get_ip_external, in which I tried to use as many public services as I could find for getting the external IP address. You can query multiple public services for the IPv4 address and pick one at random or round-robin, so that rate limiting does not kick in as fast.
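To illustrate the round-robin idea, here is a minimal sketch in Go rather than bash (the service URLs are examples, not endorsements; check each service's terms and rate limits before relying on it):

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

// Example services only; substitute ones you trust.
var services = []string{
    "https://ipinfo.io/ip",
    "https://ifconfig.me/ip",
    "https://icanhazip.com",
}

func main() {
    client := &http.Client{Timeout: 10 * time.Second}
    prev := ""
    for i := 0; ; i++ {
        // Rotate through the services to spread the requests out.
        resp, err := client.Get(services[i%len(services)])
        if err != nil {
            time.Sleep(time.Minute)
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        cur := strings.TrimSpace(string(body))
        if cur != "" && cur != prev {
            prev = cur
            fmt.Println(time.Now().Format(time.RFC1123), cur)
        }
        time.Sleep(time.Minute)
    }
}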

Piping raw code from github to ruby not working?

I am doing some basic piping of simple raw code from GitHub to ruby, as shown here, i.e.:
curl https://raw.github.com/leachim6/hello-world/master/r/ruby.rb | ruby
When I try it, it doesn't print "Hello World"; instead I just see:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
Use
curl -sSL https://raw.github.com/leachim6/hello-world/master/r/ruby.rb | ruby
This should work.
Update, to explain:
The URL redirects to
https://raw.githubusercontent.com/leachim6/hello-world/master/r/ruby.rb
so the -L option was required to follow the redirect (-L, --location); this option makes curl redo the request at the new location.
-sS hides the progress bar while still showing errors if any occur.
To debug a curl request you can use the -v option, which lets you see exactly what is happening.

Pentaho "Get file from FTP" times out

Pentaho's Get a file from FTP step fails randomly. Sometimes it properly downloads the file, and sometimes it doesn't, returning the error:
Error getting files from FTP : Read timed out
The timeout is set to 100 seconds, yet the read actually fails after less than one second.
Contrary to what the Get a file from FTP documentation says, the timeout is not in seconds but in milliseconds.
Change it to a reasonable value like 60000 (1 minute in ms) and your import will work.
