Upload file over netcat - bash

I'm trying to write a bash script that will curl a file and send it to my server over netcat, then sleep 10 seconds, send another file, sleep for 1 hour, and repeat the whole process.
The first file is uploaded successfully, but the second file is not, and I don't know what's wrong with my code.
Any help will be appreciated.
#!/bin/bash
file="curl -L mydomain.net/file.txt -o file.php"
file2="curl -L mydomain.net/file2.txt -o file2.php"
while true
do
if cat <(echo "${file}") | nc -u 120.0.0.1 4444 -w 1
echo -e "\e[92m[*][INFO] file1 uploaded"
sleep 10
then
cat <(echo "${file2}") | nc -u 120.0.0.1 4444 -w 1
echo -e "\e[91m[*][INFO] file2 uploaded"
sleep 3600
fi
done
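For reference, a corrected sketch of what the loop seems to intend: in bash, `then` must directly follow the `if` condition, and the script as written sends the *text* of the curl command over netcat rather than running curl and sending the downloaded file. Assuming the goal is the latter (URLs, host, and port copied from the question):

```shell
#!/bin/bash
# Sketch only: URLs, host, and port are copied from the question.
upload() {   # upload <url> <outfile>
    curl -sL "$1" -o "$2" &&
    nc -u -w 1 120.0.0.1 4444 < "$2"   # send the file contents, not the command string
}

run_once() {
    if upload mydomain.net/file.txt file.php; then
        echo -e "\e[92m[*][INFO] file1 uploaded\e[0m"
    fi
    sleep 10
    if upload mydomain.net/file2.txt file2.php; then
        echo -e "\e[91m[*][INFO] file2 uploaded\e[0m"
    fi
}

# Repeat forever, one hour between rounds:
# while true; do run_once; sleep 3600; done
```

Note that with `-u` (UDP) there is no delivery confirmation, so the `if` only tells you that nc exited cleanly, not that the server actually received the file.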

Related

how to terminate a process via key stroke

I have this function on my bash script:
sudo tshark -i eth0 -T fields -e ip.src -e dns.qry.name -Y "dns.qry.name~." -q 1>>log.txt 2>/dev/null &
while true
do
cat log.txt
done
It captures IPs and domain names live and saves them into the log file.
How can I configure this live mode so that it can be terminated by pressing a key?
Use tee to watch the log while the command runs in the background, then read a key to terminate the script:
tshark -i eth0 -T fields -e ip.src -e dns.qry.name -Y "ip" -q 2>/dev/null | tee log.txt &
read -n1 c && echo "Got key $c"
exit
Note: running the command in a console will terminate it :-p
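A slightly more explicit variant of the same idea, assuming the goal is to kill only the background capture rather than the whole console: record the background job's PID with `$!` and `kill` it once `read` returns (a dummy loop stands in for the sudo tshark pipeline here):

```shell
#!/bin/bash
# `capture` stands in for: sudo tshark ... 1>>log.txt 2>/dev/null
capture() { while :; do sleep 1; done; }

capture >> log.txt 2>/dev/null &
capture_pid=$!

# Returns as soon as one key is pressed (or at EOF when non-interactive).
read -r -n1 -p "Press any key to stop capturing... " key || true

kill "$capture_pid" 2>/dev/null
wait "$capture_pid" 2>/dev/null || true
echo "Capture stopped."
```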

netcat to return the result of a command (run when connection happens)

I want to use netcat to return the result of a command on a server. BUT here is the trick, I want the command to run when the connection is made. Not when I start the netcat.
Here is a simplified single-shell example. I want the ls to run when I connect to port 1234 (which I would normally do from a remote server; obviously this example is pointless, since I could just run ls locally).
max $ ls | nc -l 1234 &
[1] 36685
max $ touch file
max $ nc 0 1234
[1]+ Done ls | nc -l 1234
max $ ls | nc -l 1234 &
[1] 36699
max $ rm file
max $ nc 0 1234
file
[1]+ Done ls | nc -l 1234
You can see the ls runs when I start the listener, not when I connect to it. In the first instance there was no file when the listener started; I then created the file and made the connection, and it reported the state of the filesystem from when the listen command started (empty), not the current state. The second time around, when the file was already gone, it still showed it as present.
Something similar to the way it works when you redirect from a file. eg:
max $ touch file
max $ nc -l 1234 < file &
[1] 36768
max $ echo content > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < file
The remote connection gets the latest content of the file, not the content when the listen command started.
I tried using the "file redirect" style with a subshell and that doesn't work either.
max $ nc -l 1234 < <(cat file) &
[1] 36791
max $ echo bar > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < <(cat file)
The only thing I can think of is adding my command |netcat to xinetd.conf/systemd... I was probably going to have to add it to systemd as a service anyway.
(Actual thing I want to do : provide the list of users of the VPN to a network port for a remote service to get a current user list. My command to generate the list looks like :
awk '/Bytes/,/ROUTING/' /etc/openvpn/openvpn-status.log | cut -f1 -d. | tail -n +2 | tac | tail -n +2 | sort -b | join -o 2.2 -j1 - <(sort -k1b,1 /etc/openvpn/generate-new-certs.sh)
)
I think you want something like this, which I actually did with socat:
# Listen on port 65500 and exec '/bin/date' when a connection is received
socat -d -d -d TCP-LISTEN:65500 EXEC:/bin/date
Then in another Terminal, a few seconds later:
echo Hi | nc localhost 65500
Note: For macOS users, you can install socat with homebrew using:
brew install socat

How to cycle through a small set of options within a for loop?

I have a bunch of jobs I need to submit to a job queue. The queue has 8 different machines I can pick from or I can submit to any available server. Sometimes a server may be faulty so I would like to be able to loop through available servers I send my jobs to. A barebones version is below
# jobscript.sh
dir='some/directory/of/files/to/process'
for fn in $(ls $dir); do
submit_job -q server#machine -x python script.py $fn
done
If I don't care what machine the job is sent to I remove the #machine portion so the command is just submit_job -q server -x python script.py $fn.
If I do want to specify the machine, I append a number after machine, as in server#machine1, then on the next iteration server#machine2, then server#machine3, etc. The output of the script would then look like the following if I only use the first 3 servers:
submit_job -q server#machine1 -x python script.py file1
submit_job -q server#machine2 -x python script.py file2
submit_job -q server#machine3 -x python script.py file3
submit_job -q server#machine1 -x python script.py file4
submit_job -q server#machine2 -x python script.py file5
submit_job -q server#machine3 -x python script.py file6
submit_job -q server#machine1 -x python script.py file7
submit_job -q server#machine2 -x python script.py file8
...
The list of available servers is [1, 2, 3, 4, 5, 6, 7, 8] but I would like to additionally specify from the command line a list of servers to ignore so something like
$ bash jobscript.sh -skip 1,4,8
which would cycle through only 2, 3, 5, 6, 7 and produce the output
submit_job -q server#machine2 -x python script.py file1
submit_job -q server#machine3 -x python script.py file2
submit_job -q server#machine5 -x python script.py file3
submit_job -q server#machine6 -x python script.py file4
submit_job -q server#machine7 -x python script.py file5
submit_job -q server#machine2 -x python script.py file6
submit_job -q server#machine3 -x python script.py file7
submit_job -q server#machine5 -x python script.py file8
submit_job -q server#machine6 -x python script.py file9
...
if flag -skip is not present, just run the command without #machine which will allow the queue to decide where to place the job and the commands look like
submit_job -q server -x python script.py file1
submit_job -q server -x python script.py file2
submit_job -q server -x python script.py file3
submit_job -q server -x python script.py file4
submit_job -q server -x python script.py file5
submit_job -q server -x python script.py file6
submit_job -q server -x python script.py file7
submit_job -q server -x python script.py file8
submit_job -q server -x python script.py file9
...
Something like this should do most of the work for you:
#!/bin/bash
machines=(1 2 3 4 5 6 7 8)
skip_arr=(1 4 8)
declare -a arr
for i in "${machines[@]}"; do
    if [[ ! " ${skip_arr[*]} " =~ " $i " ]]; then
        arr+=("$i")
    fi
done
arr_len="${#arr[@]}"
declare -i i=0
for f in *; do
    i="i % arr_len"
    echo "file is $f, machine is ${arr[i]}"
    let i++
done
Right now, I've set it up to go through the current directory, and just echo the values of the machine and filename. Obviously you'll want to change this to actually execute the commands from the right directory.
The last thing you need to do is build up skip_arr from the command line input, and then check if it's empty when you're executing your command.
Hopefully this gets you most of the way there. Let me know if you have any questions about anything I've done here.
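Building `skip_arr` from the command line can be sketched like this (the `-skip` option name and comma-separated format are taken from the question):

```shell
#!/bin/bash
# Parse an optional `-skip 1,4,8` argument into the skip_arr array.
parse_skip() {
    skip_arr=()
    if [[ "$1" == "-skip" && -n "$2" ]]; then
        IFS=',' read -r -a skip_arr <<< "$2"   # "1,4,8" -> (1 4 8)
    fi
}

parse_skip "$@"    # e.g. invoked as: bash jobscript.sh -skip 1,4,8
echo "skipping: ${skip_arr[*]}"
```

To reproduce the question's third case (no `-skip` flag means no `#machine` suffix at all), you would also record whether the flag was seen, e.g. in a hypothetical `use_machines` variable checked before appending the suffix.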
Cycle through array of machines
#!/bin/bash
rotate() {
    if [[ "$1" = "all" ]]; then
        machines=(1 2 3 4 5 6 7 8)
    else
        machines=($*)
    fi
    idx=0
    max=${#machines[@]}
    for ((fn=0; fn<20; fn++)); do
        if (( max > 0 )); then
            servernr=${machines[idx]}
            (( idx = (idx+1) % max ))
        else
            servernr=""
        fi
        echo "submit -q server${servernr} file${fn}"
    done
}
# test
echo "Rotate 0 machines"
rotate
echo "Rotate all machines"
rotate all
echo "Rotate some machines"
rotate 2 5 6

bash can't capture output from aria2c to variable and stdout

I am trying to use aria2c to download a file. The command looks like this:
aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath
The command works perfectly from the script when run this way. What I'm trying to do is capture the output from the command to a variable and still display it on the screen in real-time.
I have successfully captured the output to a variable by using:
VAR=$(aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath)
With this scenario though, there's a long delay on the screen where there's no update while the download is happening. I have an echo command after this line in the script and $VAR has all of the aria2c download data captured.
I have tried appending various combinations of 2>&1 and | tee /dev/tty to the command, but nothing shows on the display in real time.
Example:
VAR=$(aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath 2>&1)
VAR=$(aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath 2>&1 | tee /dev/tty )
VAR=$(aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath | tee /dev/tty )
VAR=$((aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath) 2>&1)
VAR=$((aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath) 2>&1 | tee /dev/tty )
VAR=$((aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath) 2>&1 ) | tee /dev/tty )
I've been able to use the "2>&1 | tee" combination before with other commands but for some reason I can't seem to capture aria2c to both simultaneously. Anyone had any luck doing this from a bash script?
Since aria2c seems to output to stdout, consider teeing that to stderr:
var=$(aria2c --http-user=$USER --http-passwd=$usepw -x 16 -s 100 $urlPath | tee /dev/fd/2)
The stdout ends up in var while tee duplicates it to stderr, which displays to your screen.
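The pattern is independent of aria2c; a minimal sketch with `seq` standing in for the download command shows the capture and the live display happening at the same time:

```shell
#!/bin/bash
# seq stands in for aria2c: its stdout is captured into var,
# while tee duplicates every line to stderr (the screen) as it arrives.
var=$(seq 1 3 | tee /dev/fd/2)
echo "captured: $var"
```

If a tool wrote its progress to stderr instead of stdout, a `2>&1` before the pipe would be needed to route it through `tee`.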

Shell script to grep logs on different host and write the grepped output to a file on Host 1

Shell script needs to
ssh to Host2 from Host1
cd /test/test1/log
grep logs.txt for string error
write the grepped output to a file
and move that file to Host1
This can be accomplished by specifying the -f option to ssh:
ssh -f user@host 'echo "this is a logfile" > logfile.txt'
ssh -f user@host 'grep logfile logfile.txt' > locallogfile.txt
cat locallogfile.txt
An example using a different directory, with cd changing into it:
ssh -f user@host 'mkdir -p foo/bar'
ssh -f user@host 'cd foo/bar ; echo "this is a logfile" > logfile.txt'
ssh -f user@host 'cd foo/bar ; grep logfile logfile.txt' > locallogfile.txt
cat locallogfile.txt
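Putting the listed steps together for the paths in the question, the whole task reduces to one invocation, since redirecting ssh's output already writes the file on Host1. The sketch below wraps `sh -c` in a `remote` function so the quoting can be exercised locally; with real hosts it would be defined as `remote() { ssh user@host "$1"; }`, and the sample log file and `/tmp` path are stand-ins:

```shell
#!/bin/bash
# `remote` stands in for `ssh user@host` so the sketch runs locally;
# the directory is a stand-in for /test/test1/log on Host2.
remote() { sh -c "$1"; }

mkdir -p /tmp/test1/log
printf 'all good\nerror: disk full\n' > /tmp/test1/log/logs.txt   # sample data

# cd, grep for "error", and capture the output on Host1 in one step:
remote 'cd /tmp/test1/log && grep error logs.txt' > errors_from_host2.txt
cat errors_from_host2.txt
```

With real hosts this collapses to `ssh user@host 'grep error /test/test1/log/logs.txt' > errors_from_host2.txt`; no separate move step is needed because the redirection happens on Host1.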
