How to ensure the selection of an open port in shell - bash

So I have a script that creates a tunnel. To do that, it uses random ports.
This is the logic for random port generation:
RPORT=1
while [ $RPORT -lt 2000 ]
do
RPORT=$[($RANDOM % 3000) + 1]
done
This is good only if the port that it selects isn't in use. If that port is active, I am unable to access that server while that port is being used.
I want to do something like this:
while netstat -nat | grep "$RPORT"
do
RPORT=$[($RANDOM % 3000) + 1]
done
So I want to first check whether the port is in use; if it is, pick another random port, check again, and only use a port once it is free.
Thank you very much in advance for your time and help!
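A minimal sketch of that check loop, assuming GNU grep (for the \> word-boundary anchor) and keeping the original 2000-3000 range:
RPORT=1
# Keep drawing until the port is >= 2000 and netstat does not list it
while [ "$RPORT" -lt 2000 ] || netstat -nat | grep -q ":$RPORT\>"
do
RPORT=$(( (RANDOM % 3000) + 1 ))
done
echo "Using port $RPORT"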

function random_unused_port {
(netstat --listening --all --tcp --numeric |
sed '1,2d; s/[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*:\([0-9]*\)[[:space:]]*.*/\1/g' |
sort -n | uniq; seq 1 1000; seq 1 65535
) | sort -n | uniq -u | shuf -n 1
}
RANDOM_PORT=$(random_unused_port)
This was the function that helped me out!
Thank you Nahuel Fouilleul for the link!

To fix the answer: since ports 1 to 1000 are reserved, seq now starts at 1001.
grep -F -x -v -f <(
netstat --listening --all --tcp --numeric |
sed '1,2d; s/[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*:\([0-9]*\)[[:space:]]*.*/\1/g' |
sort -nu
) <(seq 1001 65535) | shuf -n 1
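One caveat: any check-then-use approach is racy, because another process can bind the port between the check and the moment the tunnel opens. A defensive sketch, where create_tunnel is a hypothetical placeholder for whatever command actually binds the port:
RPORT=$(random_unused_port)
# create_tunnel is a hypothetical placeholder, not a real command
until create_tunnel "$RPORT" 2>/dev/null
do
RPORT=$(random_unused_port)
done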

Related

How can I process date strings in bash?

Does anyone have any idea how I could process input like this with bash? I would like to convert absolute time to relative time. My approach works but is VERY messy. Can anyone do better? Is there a cleaner way to do this?
Input:
| 2020-08-01 15:35:47.446 | message 1 |
| 2020-08-01 15:35:48.446 | hi these |
| 2020-08-01 15:31:47.446 | do stuff now! |
Output: Shows the time difference in milliseconds
0 message 1
1000 hi these
60000 do stuff now!
Working (very dirty) approach:
while read line;
do echo $(echo "$(echo "$line" | cut -d' ' -f3 | cut -d':' -f2 | head -1) * 60000 + $(echo "$line" | cut -d' ' -f3 | cut -d':' -f3 | head -1) * 1000 - $baseval" | bc) $(echo "$line" | cut -d'|' -f3) ;
done < file.log
Looks like the question asks to turn a series of absolute timestamps into relative timestamps, using 'baseval' as the zero point in time.
It is possible to use the date command (with '+%s' to get seconds since the epoch) to simplify the calculation. If the file has many lines, this solution might not be ideal, as it forks a 'date' process for each line.
Worth noting, some of the complexity is in parsing the input format - a combination of fixed-width and delimited columns. The code uses bash's 'IFS' to split each line into components.
#!/bin/bash
baseval=0
function relative_time_ms {
    # Convert the timestamp in $1 into two tokens - epoch seconds + nanoseconds
    local dd=($(date '+%s %N' -d "$1"))
    echo $(( dd[0]*1000 + dd[1]/1000000 - baseval ))
}
while IFS='|' read -r x ts msg ; do
    # The first timestamp establishes the zero point in time
    [ "$baseval" -eq 0 ] && baseval=$(relative_time_ms "$ts")
    rel_time=$(relative_time_ms "$ts")
    echo "$rel_time | $msg"
done < file.log
Output:
0 | message 1
1000 | hi these
-240000 | do stuff now!
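If the per-line date forks become a bottleneck, GNU date can convert every timestamp in a single process: -f - reads one date per input line, and %3N truncates nanoseconds to milliseconds. A sketch against the same file.log layout:
# One 'date' process for all lines: column 2 holds the timestamps
cut -d'|' -f2 file.log | date -f - '+%s%3N'
# Pair the absolute millisecond values back up with the message column
paste -d'|' <(cut -d'|' -f2 file.log | date -f - '+%s%3N') <(cut -d'|' -f3 file.log)
Subtracting the first value to make the times relative is then a single awk pass over that output.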

From awk output, how to cut or trim characters in columns

At the moment
I want to trim .fmbi1a5nn9sp5o4qy3eyazeq5.eddvrl9sa8t448pb38vibj8ef: and .ilwio0k43fgqt4jqzyfadx19v: so the output takes less space :)
First step:
docker ps --format "{{.Names}}: {{.Status}}" | sort -k1 | column -t
mon_node-exporter.fmbi1a5nn9sp5o4qy3eyazeq5.eddvrl9sa8t448pb38vibj8ef: Up 7 days
mon_prometheus.1.ilwio0k43fgqt4jqzyfadx19v: Up 7 days
I know
I can do something like:
docker ps --format "{{.Names}}: {{.Status}}" | sort -k1 | rev | cut -d"." -f2- | rev
mon_node-exporter.fmbi1a5nn9sp5o4qy3eyazeq5
mon_prometheus.1
The issue
is that I'm losing the other columns :-/
Idea
It would sound logical to do something like this (with awk) but it does not work. Any ideas?
docker ps --format "{{.Names}} : {{.Status}}" | sort -k1 | awk '{(print $1 | rev | cut -d"." -f2- | rev),$2,$3,$4,$5,$6}' | column -t
Thank you in advance!
P
To cut the last dot extension:
$ docker ... | sort | awk '{sub(/\.[^.]*$/,"",$1)}1' file | column -t
mon_node-exporter.fmbi1a5nn9sp5o4qy3eyazeq5 Up 7 days
mon_prometheus.1 Up 7 days
or, delete anything longer than 20 chars after a dot.
$ ... | sed -e 's/\(\.[a-z0-9:]\{20,\}\)* / /' | column -t
mon_node-exporter Up 7 days
mon_prometheus.1 Up 7 days
Works! This trick will make my life so much easier.
(I removed the stray file argument, since the input comes from the pipe.)
docker ps --format "{{.Names}}: {{.Status}}" | sort -k1 | awk '{sub(/\.[^.]*$/,"",$1)}1' | column -t;
mon_grafana.1 Up 24 hours
mon_node-exporter.fmbi1a5nn9sp5o4qy3eyazeq5 Up 23 hours
Question #2:
Now how would you proceed to cut the characters after the first dot?
Cheers!
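A hedged sketch for that follow-up: awk's sub() replaces the leftmost match, so a pattern anchored at the first dot that consumes the rest of the field trims everything after it (including the trailing colon). Against the same output it would yield something like:
docker ps --format "{{.Names}}: {{.Status}}" | sort -k1 | awk '{sub(/\..*/,"",$1)}1' | column -t
mon_grafana        Up 24 hours
mon_node-exporter  Up 23 hours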

Foreach loop in bash

I have two files, one with about 100 root domains, and a second file with URLs only. Now I have to filter that URL list to get a third file containing only the URLs whose domains are on the list.
Example of URL list:
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
Example of word list:
github.com
youtube.com
facebook.com
Result:
| http://github.com/name |
My goal is to filter out whole row where URL contain specific word. This is what I tried:
for i in $(cat domains.csv);
do grep "$i" urls.csv >> filtered.csv ;
done
The result is strange: I get some of the links, but not all of the ones that contain root domains from the first file. I then tried the same thing with Python and saw that my bash loop doesn't do what I want; the Python script gives a better result, but writing it takes more time than running bash commands.
How should I accomplish this with bash?
Using grep, where -f reads the patterns from a file and -F matches them as fixed strings instead of regular expressions:
grep -F -f domains.csv urls.csv
Test Results:
$ cat wordlist
github.com
youtube.com
facebook.com
$ cat urllist
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
$ grep -F -f wordlist urllist
| http://github.com/name |
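One caveat, since -F matches fixed strings anywhere in the line: github.com in the word list would also match a line containing, say, mygithub.com. If that is a concern, adding -w restricts matches to whole words (bounded by non-word characters):
grep -F -w -f wordlist urllist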

Simplify lots of sed commands

I have the following command that I use to rewrite some maxscale output to be able to use it in other software:
maxadmin list servers | sed -r 's/[^a-z 0-9]//gi;/^\s*$/d;1,3d;' | awk '$1=$1' | cut -d ' ' -f 1,5 | sed -e 's/ /":"/g' | sed -e 's/\(.*\)/"\1"/' | tr '\n' ',' | sed 's/.$/}\n/' | sed 's/^/{/'
I am thinking this is way too complex for what I want to do, but I am not able to see a simpler version myself. What I want is to rewrite this (the output of maxadmin list servers):
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server | Address | Port | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
svr_node1 | 192.168.178.1 | 3306 | 0 | Master, Synced, Running
svr_node2 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
svr_node3 | 192.168.178.1 | 3306 | 0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------
Into this:
{"svrnode1":"Master","svrnode2":"Slave","svrnode3":"Slave"}
My command does a good job, but as I said, there should be a simpler way that runs fewer sed commands.
You can use awk, like this:
json.awk
BEGIN {
printf "{"
}
# Everything after line four and before the last ------ line,
# also skipping the last empty line (if any).
NR>4&&!/^([-]|$)/{
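# Default whitespace splitting turns the '|' separators into fields of
# their own, so $1 is the server name and $9 is the first status word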
sub(/,/,"",$9) # Remove trailing comma
printf "%s\"%s\":\"%s\"",s,$1,$9
s="," # Set comma separator after first iteration
}
END {
print "}"
}
Run it like this:
maxadmin list servers | awk -f json.awk
Output:
{"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"}
In comments there came up the question how to achieve that without an extra json.awk file:
maxadmin list servers | awk 'BEGIN{printf"{"}NR>4&&!/^([-]|$)/{sub(/,/,"",$9);printf"%s\"%s\":\"%s\"",s,$1,$9;s=","}END{print"}"}'
Ugly, but works. ;)
If you want to put this into a shell script, consider a multiline version like this:
maxadmin list servers | awk '
BEGIN{printf"{"}
NR>4&&!/^([-]|$)/{
sub(/,/,"",$9)
printf"%s\"%s\":\"%s\"",s,$1,$9
s=","
}
END{print"}"}'

Bash: Limit output of ls and grep

Let me present an example and then try to explain my problem:
noob#noob:~/Downloads$ ls | grep srt$
Elementary - 01x01 - Pilot.LOL.English.HI.C.orig.Addic7ed.com.srt
Haven - 01x01 - Welcome to Haven.DVDRip.SAiNTS.English.updated.Addic7ed.com.srt
Haven - 01x01 - Welcome to Haven.FQM.English.HI.C.updated.Addic7ed.com.srt
Supernatural - 08x01 - We Need to Talk About Kevin.LOL.English.HI.C.updated.Addic7ed.com.srt
The Big Bang Theory - 06x02 - The Decoupling Fluctuation.LOL.English.HI.C.orig.Addic7ed.com.srt
Torchwood - 1x01 - Everything changes.0TV.English.orig.Addic7ed.com.srt
Torchwood - 1x01 - Everything changes.divx.English.updated.Addic7ed.com.srt
Now I only want to delete the first four results of the above command. Normally, if I had to delete all the files, I would do ls | grep srt$ | xargs -I {} rm {}, but in this case I only want to delete the top four.
So, how can I limit the output of ls and grep, or can you suggest an alternate way to achieve this?
You can pipe your commands to head -n to limit the output to the first n lines:
ls | grep 'srt$' | head -4
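To then delete just those four, the same pipe can feed xargs as in the question. A sketch, assuming no file names contain embedded newlines (ls output is consumed line by line):
ls | grep 'srt$' | head -4 | xargs -I {} rm {}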
Alternatively, sed -n 'M,Np' prints only lines M through N:
$ for i in `seq 1 345`; do echo $i ;done | sed -n '1,4p'
1
2
3
4
$ for i in `seq 1 345`; do echo $i ;done | sed -n '335,360p'
335
336
337
338
339
340
341
342
343
344
345
If you don't have too many files, you can use a bash array instead of parsing ls output:
matching_files=( *.srt )
rm "${matching_files[@]:0:4}"
Here ${matching_files[@]:0:4} expands to the first four elements of the array (offset 0, length 4).
