Connecting to a website without curl or wget - bash

I'm trying to connect to a website like this "examplesite.com:9000/link" using a method like this:
echo -e "GET http://google.com HTTP/1.0\n\n" | nc google.com 80 > /dev/null 2>&1
I've seen people ping google with the above code.
I can use curl or wget to reach that site, but I don't want to use those methods because I'm using a microcontroller that doesn't support curl or wget.
Could someone explain how the above code is working?

nc opens a connection to port 80 on google.com
The echo statement sends a valid GET request, using the HTTP/1.0 protocol
> /dev/null 2>&1 redirects both stdout and stderr, producing no output
You can tell success by the exit code, in $? (value of 0 means success)
You could write this shorter:
echo -e "GET /\n\n" | nc google.com 80
And more portable (without echo -e):
printf "GET /\n\n" | nc google.com 80
Or more portable but still with echo:
{ echo GET /; echo; } | nc google.com 80
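To act on the result, here is a minimal sketch building on the exit-code check described above (the host and port are just the question's examples; on most nc builds a status of 0 means the connection and request succeeded):
printf 'GET / HTTP/1.0\r\n\r\n' | nc google.com 80 > /dev/null 2>&1
if [ $? -eq 0 ]; then
    echo "connection succeeded"
else
    echo "connection failed"
fi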


Error "write(stdout): Broken pipe" when using GNU netcat with named pipe

I have a simple shell script sv that responds to HTTP requests with a short message:
#!/bin/sh
echo_crlf() {
    printf '%s\r\n' "$@"
}
respond() {
    body='Hello world!'
    echo_crlf 'HTTP/1.1 200 OK'
    echo_crlf 'Connection: close'
    echo_crlf "Content-Length: $(echo "$body" | wc -c)"
    echo_crlf 'Content-Type: text/plain'
    echo_crlf
    echo "$body"
}
mkfifo tube
respond < tube | netcat -l -p 8888 > tube
rm tube
When I start the script and send a request,
everything looks right on the client side:
$ ./sv
$ curl localhost:8888
Hello world!
$
but the script prints the following error:
$ ./sv
write(stdout): Broken pipe
$
I am running this script on Linux,
using GNU's implementation of netcat and coreutils.
I've tried running this script with both dash and bash; the same error occurs.
What is the cause of this error and how can I avoid it?
Edit: It seems that the error was caused by leaving out the read command in respond when I simplified my code for this question. That, or the lack of a Connection: close header when testing with a web browser, causes this error message.
You're writing to the FIFO "tube", but nothing is reading from it. You can try it like this:
{ respond; cat tube > /dev/null; } | netcat -l -p 8888 > tube
I don't get the point of using the FIFO here. The following would just work:
respond | netcat -l -p 8888 > /dev/null
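Alternatively, in line with the question's edit, here is a sketch that keeps the FIFO but drains the request headers before replying (handle and cr are names introduced here; respond is the function from the script above):
#!/bin/sh
cr=$(printf '\r')
handle() {
    # read header lines until the blank line that ends the request,
    # so netcat never writes into a FIFO whose reader has already exited
    while read -r line; do
        [ "$line" = "$cr" ] || [ -z "$line" ] && break
    done
    respond
}
mkfifo tube
handle < tube | netcat -l -p 8888 > tube
rm tube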

netcat to return the result of a command (run when connection happens)

I want to use netcat to return the result of a command on a server. BUT here is the trick, I want the command to run when the connection is made. Not when I start the netcat.
Here is a simplified single shell example. I want the ls to run when I connect to 1234 (which I would normally do from a remote server, obviously this example is pointless I could do an ls locally)
max $ ls | nc -l 1234 &
[1] 36685
max $ touch file
max $ nc 0 1234
[1]+ Done ls | nc -l 1234
max $ ls | nc -l 1234 &
[1] 36699
max $ rm file
max $ nc 0 1234
file
[1]+ Done ls | nc -l 1234
You can see that ls runs when I start the listener, not when I connect to it. In the first instance there was no file when the listener started; I then created the file and connected, and it reported the state of the filesystem from when the listen command started (empty), not the current state. The second time around, when file was already gone, it showed it as still present.
Something similar to the way it works when you redirect from a file. eg:
max $ touch file
max $ nc -l 1234 < file &
[1] 36768
max $ echo content > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < file
The remote connection gets the latest content of the file, not the content when the listen command started.
I tried using the "file redirect" style with a subshell and that doesn't work either.
max $ nc -l 1234 < <(cat file) &
[1] 36791
max $ echo bar > file
max $ nc 0 1234
content
[1]+ Done nc -l 1234 < <(cat file)
The only thing I can think of is adding my command |netcat to xinetd.conf/systemd... I was probably going to have to add it to systemd as a service anyway.
(The actual thing I want to do: provide the list of users of the VPN on a network port, so a remote service can fetch a current user list. The command that generates the list looks like this:
awk '/Bytes/,/ROUTING/' /etc/openvpn/openvpn-status.log | cut -f1 -d. | tail -n +2 | tac | tail -n +2 | sort -b | join -o 2.2 -j1 - <(sort -k1b,1 /etc/openvpn/generate-new-certs.sh)
)
I think you want something like this, which I actually did with socat:
# Listen on port 65500 and exec '/bin/date' when a connection is received
socat -d -d -d TCP-LISTEN:65500 EXEC:/bin/date
Then in another Terminal, a few seconds later:
echo Hi | nc localhost 65500
Note: For macOS users, you can install socat with homebrew using:
brew install socat
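If repeated connections are needed, socat can fork per client, and its SYSTEM address runs its argument through a shell, which suits pipelines like the awk command above. A sketch (the date pipeline is just a stand-in):
# fork: keep listening and run the command anew for every connection
# SYSTEM: runs its argument through /bin/sh, so pipes work
socat TCP-LISTEN:65500,reuseaddr,fork SYSTEM:'date | tr a-z A-Z'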

how to check if internet connection is available in MacOS Terminal?

I used the ping command:
ping -c1 -W1 8.8.8.8
It works well when online, but when the internet is unavailable or the LAN is down, the terminal takes a long time to return a result.
I want the terminal to reply within at most 0.5 to 1 second, so I can use the result in my ExtendScript code:
var command = "ping -c1 -W1 8.8.8.8";
var result = system.callSystem(command);
I tried to set a timeout using -t or -W, but that failed.
Edit:
Thanks to Philippe, the solution is:
nc -z www.google.com 80 -G1
If you are in the macOS Terminal, you can check the internet connection with:
nc -z www.google.com 80
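Here is a sketch wrapping that check for use in a script; -G 1 (the macOS connection timeout from the edit above) keeps the worst case at about one second:
if nc -z -G 1 www.google.com 80 2>/dev/null; then
    echo online    # exit status 0: the TCP connection succeeded
else
    echo offline
fi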

Monitoring URL Requests from Shell Script

I am required to create a shell script on the Mac that monitors URL requests: if a specified URL (for example, *.google.com) is hit from any browser or program, the shell script should prompt or perform an operation. Could anyone guide me on how to do this?
These environment variables set the proxy settings for my programs, like curl, wget, and the browser.
$ env | grep -i proxy
NO_PROXY=localhost,127.0.0.0/8,::1
http_proxy=http://138.106.75.10:3128/
https_proxy=https://138.106.75.10:3128/
no_proxy=localhost,127.0.0.0/8,::1
Below you can see that curl respects them and always connects through my proxy; in your case the proxy setting would be http://localhost:3128.
$ curl -vvv www.google.com
* Rebuilt URL to: www.google.com/
* Trying 138.106.75.10...
* Connected to 138.106.75.10 (138.106.75.10) port 3128 (#0)
> GET http://www.google.com/ HTTP/1.1
> Host: www.google.com
> User-Agent: curl/7.47.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: http://www.google.se/?gfe_rd=cr&ei=3ExvWajSGa2EyAXS376oCw
< Content-Length: 258
< Date: Wed, 19 Jul 2017 12:13:16 GMT
< Proxy-Connection: Keep-Alive
< Connection: Keep-Alive
< Age: 0
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.se/?gfe_rd=cr&ei=3ExvWajSGa2EyAXS376oCw">here</A>.
</BODY></HTML>
* Connection #0 to host 138.106.75.10 left intact
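In your case, the variables would be set along these lines (a sketch using the localhost:3128 value suggested above):
export http_proxy=http://localhost:3128
export https_proxy=http://localhost:3128
export no_proxy=localhost,127.0.0.0/8,::1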
Install apache on your machine and configure it as a forward proxy, like the example below; the trick is to combine mod_actions and mod_proxy:
Listen 127.0.0.1:3128
<VirtualHost 127.0.0.1:3128>
    Script GET "/cgi-bin/your-script.sh"
    ProxyRequests On
    ProxyVia On
    <Proxy http://www.google.com:80>
        ProxySet keepalive=On
        Require all granted
    </Proxy>
</VirtualHost>
I never tried it, but theoretically it should work.
If you want to monitor or capture network traffic, tcpdump is your friend: it requires no proxy servers or additional installs, and should work on stock macOS as well as other *nix variants.
Here's a simple script:
sudo tcpdump -ql dst host google.com | while read line; do echo "Match found"; done
The while read loop will keep running until manually terminated; replace echo "Match found" with your preferred command. Note that this will trigger multiple times per page load; you can use tcpdump -c 1 if you only want it to run until it sees relevant traffic.
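For instance, a one-shot sketch combining -c 1 with the same filter:
# exits after the first matching packet; && then runs the command once
sudo tcpdump -c 1 -ql dst host google.com > /dev/null && echo "Match found"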
As Azize mentions, you could also have tcpdump output to a file in one process and monitor that file in another. incrontab is not available on Mac OS X; you could wrap tail -f in a while read loop:
sudo tcpdump -l dst host google.com > /tmp/output &
tail -fn 1 /tmp/output | while read line; do echo "Match found"; done
There's a good similar script available on github. You can also read up on tcpdump filters if you want to make the filter more sophisticated.
First, put the URLs you want to monitor in the sample.txt file.
FILEPATH=/tmp/url
INFILE=$FILEPATH/sample.txt
OUTFILE=$FILEPATH/url_status.txt
> $OUTFILE
for url in `cat $INFILE`
do
    echo -n "$url |" >> $OUTFILE
    timeout 20s curl -Is $url | head -1 >> $OUTFILE
done
grep '200' $OUTFILE | awk '{print $1" Url is working fine"}' > $FILEPATH/working.txt
grep -v '200' $OUTFILE | awk '{print $1" Url is not working"}' > $FILEPATH/notworking.txt
COUNT=`cat $FILEPATH/notworking.txt | wc -l`
if [ $COUNT -eq 0 ]
then
    echo "All urls are working fine"
else
    echo "Issue with the following urls:"
    cat $FILEPATH/notworking.txt
fi
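For example, sample.txt could contain one URL per line (hypothetical entries):
https://www.google.com
https://example.com/health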

UDP sniffer similar to netcat TCP sniffer

From the netcat man page one can sniff / save a TCP stream using something like:
mkfifo data
cat data | nc -l $port | tee -a $fn1 | nc $server $port | tee -a $fn2 > data
So I tried something like the following to do the same for UDP:
mkfifo data
cat data | nc -lu $port | tee -a $fn1 | nc -u $server $port | tee -a $fn2 > data
But it fails miserably, which I assume is because of race conditions between writing to and reading from data and the pipe out of the tee command, meaning I can't guarantee that UDP packets are transmitted one by one.
Is there an existing command or tool I can use to sniff UDP conversations without altering the packets? Preferably a 1-liner bash command, though a short ruby or python or whatever script works too.
socat can relay the UDP conversation and, with -x, writes a hex dump of everything it transfers to stderr:
socat -x "udp-listen:$port" "udp:$server:$port" 2> logfile
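With concrete (hypothetical) values filled in, it would run like this; the hex dump of the relayed packets ends up in logfile:
port=4000             # hypothetical local listening port
server=198.51.100.7   # hypothetical remote UDP server
socat -x "udp-listen:$port" "udp:$server:$port" 2> logfile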
