I've got a bash script I use to create an ngrok tunnel, then I use dweet.io to post the tunnel address & port.
If that's meaningless to you, don't worry; essentially I'm using wget --post-data to post a string to an address.
This bash script is auto-started with a cron job.
while true
do
    #Gets the internal IP
    IP="$(hostname -I)"
    #Gets the external IP
    EXTERNALIP="$(curl -s https://canihazip.com/s )"
    echo "Dweeting IP... "
    TUNNEL="$(curl -s http://localhost:4040/api/tunnels)"
    echo "${TUNNEL}" > tunnel_info.json
    #Gets the tunnel's address and port
    TUNNEL_TCP=$(grep -Po 'tcp:\/\/[^"]+' ./tunnel_info.json )
    #Pushes all this information to dweet.io
    wget -q --post-data="tunnel=${TUNNEL_TCP}&internal_ip=${IP}&external_ip=${EXTERNALIP}" http://dweet.io/dweet/for/${dweet_id_tunnel}
    sleep $tunnel_delay
done
This works; however, the directory I start the script from gets spammed with files named
dweet_id_tunnel.1,
dweet_id_tunnel.2,
dweet_id_tunnel.3,
...
These contain the HTTP response returned by dweet.io for each wget --post-data request.
As this script runs regularly, it's rather annoying to have a folder filled with thousands of these responses. I'm not sure why they're even created, since I added the -q argument to wget, which I thought would suppress the responses.
Any idea what I need to change to stop these files being created?
wget fetches the response and saves it to a file; that's what it does. If you don't want that, add -O /dev/null, or switch to curl, which seems to be more familiar to you anyway, as well as more versatile.
The -q option turns off reporting (i.e. progress messages etc., similar to curl -s), not downloading.
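For example, a sketch of the wget line from the script above with the response body discarded (same variables as before), plus a roughly equivalent curl call:
wget -q -O /dev/null --post-data="tunnel=${TUNNEL_TCP}&internal_ip=${IP}&external_ip=${EXTERNALIP}" "http://dweet.io/dweet/for/${dweet_id_tunnel}"
# or with curl, which prints nothing on success and writes no file:
curl -s -o /dev/null --data "tunnel=${TUNNEL_TCP}&internal_ip=${IP}&external_ip=${EXTERNALIP}" "http://dweet.io/dweet/for/${dweet_id_tunnel}"
Either way, nothing ends up in the working directory.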
I'm working on a bash script that connects via SSH to a server running sish (https://github.com/antoniomika/sish). This will essentially create a port forward on the internet, like ngrok, using only SSH. Here is what happens during normal usage.
The command:
ssh -i ./tun -o StrictHostKeyChecking=no -R 5900:localhost:5900 tun.domain.tld sleep 10
The response:
Starting SSH Forwarding service for tcp:5900. Forwarded connections can be accessed via the following methods:
TCP: tun.domain.tld:43345
Now I need to send the ssh command to the background and capture the response from the server in a variable, so that I can grab the port sish has assigned and send it somewhere (probably a webhook). I've tried a few things, like using -f and piping the output to a file or named pipe and trying to cat it, but the redirection never works: the file gets created, yet it's always empty. Any assistance would be greatly appreciated.
If you're running a single instance of sish (and of the tunnel you're attempting to define) you can actually have sish bind the specific port you want (in this case 5900).
You just set the --bind-random-ports=false flag on your server command in order to tell sish that it's okay not to use random ports.
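For example, a rough sketch of the server side (assuming the server binary is invoked as sish; keep whatever other options you already pass):
sish --bind-random-ports=false [your existing options]
With that set, the client's ssh -R 5900:localhost:5900 tun.domain.tld request should be bound to port 5900 on the server rather than a random port.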
If you don't want to do this (or you have multiple clients that will expose this same port), you can use a simple script like the following:
#!/bin/bash
ADDR=""
# Start the tunnel. Use a phony command to tell ssh to clean the output.
exec 3< <(ssh -R 5900:localhost:5900 tun.domain.tld foobar 2>&1 | grep --line-buffered TCP | awk '{print $2; system("")}')
# Get our buffered output that is now just the address sish has given to us.
for i in 1; do
    read <&3 line
    ADDR="$line"
done
# Here is where you'd call the webhook
echo "Do something with $ADDR"
# If you want the ssh command to continue to run in the background
# you can omit the following. This is to wait for the ssh command to
# exit or until this script dies in order to kill the ssh command.
PIDS=($(pgrep -P $(pgrep -P $$)))
function killssh() {
    kill ${PIDS[0]}
}
trap killssh EXIT
while kill -0 ${PIDS[0]} 2> /dev/null; do sleep 1; done;
sish also has an admin API which you can scrape. The information on that is available here.
References: I build and maintain sish and use it myself (as well as a similar type of script).
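Where the script says "Here is where you'd call the webhook", that step could look something like the following sketch (the webhook URL is a placeholder; $ADDR is the address captured above):
curl -s -X POST --data "tunnel=${ADDR}" "https://example.com/my-webhook"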
Hi, I have the following simple script that reads some URLs from a text file and writes each URL with its response code to another text file.
#!/bin/bash
while read url
do
urlstatus=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
echo "$url"
echo "$url $urlstatus" >> urlstatus.txt
done < "$1"
As an example I am trying the following link:
www.proddigia.com/inmueble/pisos/en-venta/el-putget-i-el-farro/sarria-sant-gervasi/barcelona/6761
However, I get 0 as the response. When I check with Google I get 200. Am I missing something in the script?
Zero is not a valid HTTP response code.
If curl is unable to establish an HTTP connection to the server, or if the server (somehow) fails to deliver a well-formed HTTP response message, there will be no "http code" to return in that variable. Zero is what you would probably see in that scenario.
It could also be that the value of $url that you are using is invalid. For example, if the URL is enclosed in < and > characters, curl won't understand it. I would expect a zero in that case too.
The problem is that --silent is telling curl to throw away all of the error messages, so it can't tell you what the problem is.
I suggest that you see what you get by running the following command:
curl -o /dev/null --head "$url"
with the identical url string to the one you are currently using.
I just figured out that if you use a txt file created on Windows, it does not work as expected on Ubuntu. That was the reason I got 0. You need to create the txt file on Ubuntu and copy the links over there. Thanks for the answers anyway.
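The underlying cause is almost certainly the CRLF line endings in a file created on Windows: each $url the loop reads ends in a hidden carriage return, which curl cannot resolve, hence the 0. Instead of recreating the file, you can also strip the carriage returns, for example (urls.txt is a placeholder for your list file):
tr -d '\r' < urls.txt > urls_unix.txt
# or edit in place with GNU sed:
sed -i 's/\r$//' urls.txt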
I have a cronjob getting a list of prices from a website in JSON format and copying it into the right folder and it looks like this:
curl 'https://somesite.web/api/list.json' > ~/list.json.tmp && cp ~/list.json.tmp /srv/www/list.json > /dev/null
The problem is that a couple of times the website was down while cron was trying to get the list, so it ended up with an empty JSON file. To prevent this in the future, is there a way to make the cron job only copy the file if it's not empty (there's no cp option to do this)? Or should I create a script to do that and call it after getting the list?
Maybe curl --fail will accomplish what you want? From the man page:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts. In normal cases when an HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22.
This would cause curl to exit with a failure code, and thus the && in your statement would not execute the copy.
curl ... && [ -s ~/list.json.tmp ] && cp ~/list.json.tmp /srv/www/list.json
The -s test is true if the named file exists and is not empty.
(Incidentally, the > /dev/null redirection is not necessary. The cp command might print error messages to stderr, but it shouldn't print anything to stdout, which is what you're redirecting.)
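Putting the two suggestions together, the cron command could look something like this sketch (reusing the paths from the question):
curl --fail --silent --show-error 'https://somesite.web/api/list.json' > ~/list.json.tmp && [ -s ~/list.json.tmp ] && cp ~/list.json.tmp /srv/www/list.json
With --fail the && chain stops on an HTTP error, and the -s test guards against copying an empty file if the download dies partway through.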
I've done my research and tried lots of ways, but to no avail; I still could not get my Postfix mail to run the script.
Content of /etc/aliases:
test2: "|/home/testscript.sh"
Content of /home/testscript.sh (note: I've tried many kinds of things in the script; even a simple echo does not work):
#!/bin/sh
read msg
echo "$msg"
I've tried running the script manually and it works fine.
How would you tell that it's working?
Even if you successfully direct mail to the script, you're not going to see the output of the "echo" command. If you expect to get an email response from the script, the script will need to call out to /bin/mail (or sendmail or contact an SMTP server or something) to generate the message. If you're just looking to verify that it's working, you need to create some output where you can see it -- for example, by writing the message to the filesystem:
#!/bin/sh
cat > /tmp/msg
You should also look in your mail logs (often but not necessarily /var/log/mail) to see if there are any errors (or indications of success!).
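As a minimal sketch of that kind of check (assumptions: the Postfix pipe user can write to /tmp, the script is executable via chmod +x, and you've run newaliases after editing /etc/aliases):
#!/bin/sh
# Append every message delivered to the alias to a log file,
# so you can confirm the pipe is actually being invoked.
{
    echo "--- message received at $(date) ---"
    cat
} >> /tmp/testscript.log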
I am using a shell script in Jenkins that, at a certain point, uploads a file to a server using curl. I would like to see whatever output curl produces but also check whether it is the output I expect. If it isn't, then I want to set the shell error code to > 0 so that Jenkins knows the script failed.
I first tried using curl -f, but this causes the pipe to be cut as soon as the upload fails and the error output never gets to the client. Then I tried something like this:
curl ...params... | tee /dev/tty | \
xargs -I{} test "Expected output string" = '{}'
This works from a normal SSH shell but in the Jenkins console output I see:
tee: /dev/tty: No such device or address
I'm not sure why this is since I thought Jenkins was communicating with the slave using a normal SSH shell. In any case, the whole xargs + test thing strikes me as a bit of a hack.
Is there a way to accomplish this in Jenkins so that I can see the output and also test whether it matches a specific string?
When Jenkins communicates with the slave via SSH, there is no terminal allocated, so there is no /dev/tty device for that process.
Maybe you can send it to /dev/stderr instead? That will be the terminal in an interactive session and a log file in a non-interactive session.
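Alternatively, if you mainly need the output to be both visible and testable, a minimal sketch that avoids /dev/tty entirely (UPLOAD_URL, FILE_TO_UPLOAD and the expected string are placeholders for your real parameters):
response="$(curl -sS "$UPLOAD_URL" -T "$FILE_TO_UPLOAD")"
echo "$response"                                        # still shows up in the Jenkins console log
[ "$response" = "Expected output string" ] || exit 1    # non-zero exit marks the build as failed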
Have you thought about using the Publish over SSH Plugin instead of using curl? Might save you some headache.
If you just copy the file from master to slave, there is also a plugin for that, the Copy to Slave plugin.
Cannot write any comments yet, so I had to post it as an answer.