I'm working on a mac running El Capitan.
For a project I've been working on, I'm trying to write a simple script to log ping times. I've come to the conclusion it isn't as simple as I'd thought. My first problem was an "Ambiguous redirect" error when using variables. I've corrected that by putting quotes around the variables, with help from "$INPUT Ambiguous redirect".
But now I get a different error when running the following script:
#!/bin/sh
set PINGDELAY=1.5
set PINGIP=google.nl
set PINGLOG=~/Library/Logs/doctorping.log
sudo ping -i "$PINGDELAY" "$PINGIP" | perl -nle 'use Time::Piece; BEGIN {$|++} print localtime->datetime, " ", $_' >> "$PINGLOG"
The error is
ping: invalid timing interval: `'
It's probably something I've overlooked but I'm a real noob when it comes to scripting and programming.
My goal isn't to process text or extract bits of it; it's just to monitor the connection and have the results written to the log. It'd probably be wise to limit the number of log lines, but I'll get to that later. (Of course, it would be appreciated if someone could point me in the right direction, but first things first.)
Thanks!
The set command is not for setting shell variables; it's used to set shell execution options and/or to replace the script's argument list. Just leave it off. Also, it's best to use lowercase (or mixed-case) variable names to avoid accidentally clobbering one of the variables that mean something special to the shell. Here's what I get:
#!/bin/sh
pingdelay=1.5
pingIP=google.nl
pinglog=~/Library/Logs/doctorping.log
sudo ping -i "$pingdelay" "$pingIP" | perl -nle 'use Time::Piece; BEGIN {$|++} print localtime->datetime, " ", $_' >> "$pinglog"
It works fine if you use bash like this rather than sh:
#!/bin/bash -xv
PINGDELAY=1.5
PINGIP=google.nl
PINGLOG=~/Library/Logs/doctorping.log
sudo ping -i "$PINGDELAY" "$PINGIP"
The -xv is just for debugging - you can remove it safely.
I have a bash script that needs to connect to another server for parts of its execution. I have tried many of the standard instructions and syntaxes for executing ssh commands, but with little progress.
On the remote server, I need to source a shell script that contains several env parameters for some software. One of these parameters is then used in a filepath to point to an executable, which takes an option '-lprojects' that can list the projects for the software on that server.
I have verified multiple times that running the commands on the server itself works. My issue is when I try to run the same commands over SSH. If I use the approach where I use the env variable for the filepath, it shows that the variable is null in the filepath, giving a file/directory not found error. If I hard-code the filepath to point to the executable, it gives me an error saying that the shell script is not sourced (which I assume it needs for other functions and APIs in order for the executable to reveal its -lprojects option).
Here is roughly what the code looks like:
ssh remote.server 'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
echo ${filelist[#]}
for file in $filelist
do
echo $file
ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv;
done
As you can see, I then also need to loop through the contents of the -lprojects output from remote.server, run some awk and sed on the files to extract the wanted text (this works), and then write that back to the values.csv file on the client (the local server). This is more general, as there will be several servers I have to do this for, but all of them have to write to the same .csv file. For simplicity, you can regard this as a single-remote-server case, since it is vital that I get it working for at least one to begin with.
Note that I also tried something like:
ssh remote.server << EOF
'source /filepath/remotescript.sh'
filelist=$(ssh remote.server $REMOTEVARIABLE'/bin/executable -lprojects')
EOF
But with similar results. I also tried placing the single quotes in the filelist assignment both before and after the remote variable, etc.
How do I go about properly doing this?
To access the environment variable, you must source the script that defines it in the same SSH call as the one where you use it; otherwise you're running your commands in two different, unrelated shells:
filelist=$(ssh remote.server 'source /filepath/remotescript.sh; $REMOTEVARIABLE/bin/executable -lprojects')
Assuming the executable outputs one file name per line, you can use readarray to achieve the same effect:
readarray -t filelist < <(ssh remote.server '
source /filepath/remotescript.sh
$REMOTEVARIABLE/bin/executable -lprojects
'
)
echo "${filelist[@]}"
for file in "${filelist[@]}"
do
    echo "$file"
    ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv
done
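Since there will eventually be several remote servers all appending to the same local values.csv, one way to extend this is an outer loop over hostnames. A rough sketch, where the server names, the awk/sed programs and the paths are placeholders to fill in:
# Hypothetical list of remote servers; adjust to your environment.
servers=(remote.server another.remote.server)

for server in "${servers[@]}"
do
    readarray -t filelist < <(ssh "$server" '
        source /filepath/remotescript.sh
        $REMOTEVARIABLE/bin/executable -lprojects
    ')
    for file in "${filelist[@]}"
    do
        # awk/sed programs and paths are the placeholders from the question
        ssh SERVER2 awk 'something' /filepath/"$file"/somefile.txt | sed 'something' >> filepath/values.csv
    done
done
Since the redirection to values.csv happens locally, every iteration appends to the same file on the client, which is what you want.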
I am running Ubuntu 13.10 and want to write a bash script that will execute a given task at non-predetermined time intervals. My understanding is that cron jobs require me to know when the task will be performed again, so I was recommended to use "at".
I'm having a bit of trouble using "at." Based on some experimentation, I've found that
echo "hello" | at now + 1 minutes
will run in my terminal (with and without quotes). Running "atq" results in my computer telling me that the command is in the queue. However, I never see the results of the command. I assume that I'm doing something wrong, but the manpages don't seem to be telling me anything useful.
Thanks in advance for any help.
Besides the fact that commands are run without a terminal (output and input are probably redirected to /dev/null), your command would also not run, since what you're passing to at is not echo hello but just hello. Unless hello is really an existing command, it won't run. What you probably want is:
echo "echo hello" | at now + 1 minutes
If you want to know if your command is really running, try redirecting the output to a file:
echo "echo hello > /var/tmp/hello.out" | at now + 1 minutes
Check the file later.
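As for the original goal of running a task at non-predetermined intervals, one common pattern is to have the job re-queue itself with at when it finishes. A sketch, assuming a hypothetical /home/you/task.sh that decides its own next delay:
#!/bin/bash
# Hypothetical /home/you/task.sh -- re-queues itself with at when it finishes.

# Do the real work, writing somewhere visible, since at jobs have no terminal.
echo "task ran at $(date)" >> /var/tmp/task.out

# Decide the next interval however you like (here: a random 1-10 minutes).
delay=$(( (RANDOM % 10) + 1 ))

# at reads the command(s) to run from stdin, as in the examples above.
echo "/home/you/task.sh" | at now + "$delay" minutes
Kick it off once with echo "/home/you/task.sh" | at now, check the queue with atq, and cancel it with atrm if needed.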
I am running an installation script to install Grails on new machines with GVM.
#!/bin/bash
set -e
source "/Users/mecca831/.gvm/bin/gvm-init.sh"
echo "Install grails"
gvm install grails 2.1.1
GVM returns 1 in this case, which breaks my script. However, the script works if set -e is removed. It returns 0 and the correct prompt will show up. Anyone run into the same problem trying to install Grails with GVM?
Non-trivial scripts have to be specifically written to run with set -e.
gvm-init.sh has not been written to allow this, and breaks when it's enabled.
Consider for example this section:
GVM_DETECT_HTML="$(echo "$GVM_RESPONSE" | tr '[:upper:]' '[:lower:]' | grep 'html')"
if [[ -n "$GVM_DETECT_HTML" ]]; then
...
This isn't good or idiomatic bash code in any way, but it works well enough by itself. It finds lines containing "html" and sticks them in the variable. Then it checks whether the variable is empty or not.
However, when you enable set -e, grep exits with a non-zero status whenever it finds no match, so the assignment fails and the script exits before it has a chance to look at the (empty) variable and account for that.
There's not really anything you can do about this, short of rewriting gvm-init.sh or running set +e before the affected code (and set -e again afterwards if you still want it for the rest of your script).
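To illustrate the mechanism, here is a minimal sketch (not taken from gvm-init.sh itself) of how a grep that matches nothing kills a set -e script:
#!/bin/bash
set -e

# grep exits non-zero when it finds no match, so this assignment fails
# and set -e aborts the whole script right here:
match="$(echo "plain text" | grep 'html')"

echo "never reached: match='$match'"
Appending || true to the pipeline (grep 'html' || true) makes the assignment always succeed, which is the kind of change you would have to make throughout gvm-init.sh to make it safe under set -e.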
I would like to copy a file from a remote machine onto my local machine, up to the first line containing a certain pattern.
Scenario: update my local Bash profile with a part of the remote Bash profile, up to the point that my admin has verified.
Is there a better way (I guess there likely is!) than this quick "shell scripting" hack?
ssh tinosino#robottinosino-wifi cat /Users/tinosino/.profile | sed '/Verify this script further than this point/,$ d' > /home/tinosino/Desktop/tinosino_bash_profile.sh
Remote machine: robottinosino-wifi (OSX)
Sentinel line: Verify this script further than this point
I can use basic shell scripting, preferably in Bash as it's the default, or the most common diff/source-control binaries.
The idea, you guessed it, is to ultimately automate this process. Cron? Any idea how you would do this? The start of my Bash profile should come from the server; the "rest" is free for me to customise.
Previous failed attempts of mine:
using head
using process substitution <( ... )
using grep
using a local named pipe (this was fun: the named pipe needs a program generating its text though, executing something like the cat->sed line above)
Important note: what would be highly desirable is for the remote system not to go through the entire file, but to stop filtering once it "sees" the sentinel line. If the pattern is on line #300 of 1,000,000,000, it should only read 300 lines.
The problem is that your sed command is structured to read through the entire file.
You can use sed -n '/Verify this script/q; p' to instead quit once the line is found:
ssh tinosino#robottinosino-wifi cat /Users/tinosino/.profile | sed -n '/Verify this script/q; p' > /home/tinosino/Desktop/tinosino_bash_profile.sh
Or without the useless use of cat, which doesn't make a significant difference in this case, but which will transfer less data if you want to remove multiple sections later:
ssh tinosino#robottinosino-wifi "sed -n '/Verify this script/q; p' /Users/tinosino/.profile" > /home/tinosino/Desktop/tinosino_bash_profile.sh
Just perform the filtering on the remote server.
ssh tinosino#robottinosino-wifi sed -n 'p;/Verify.../q' /Users/tinosino/.profile \
>>/home/tinosino/Desktop/tinosino_bash_profile.sh
The -n flag and the p and q commands together print the lines up to and including the first line that contains "Verify...", then stop reading the rest of the file.
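Whether the sentinel line itself ends up in the copy depends only on the order of the sed commands; a quick local demonstration (the /tmp/demo.txt file is just for illustration):
printf 'one\ntwo\nVerify this script further than this point\nthree\n' > /tmp/demo.txt

sed -n '/Verify/q; p' /tmp/demo.txt   # stops at the sentinel and leaves it out
sed -n 'p; /Verify/q' /tmp/demo.txt   # prints up to and including the sentinel
Either way sed quits at the sentinel and never reads the rest of the file, which is what keeps things short on huge files.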
I have a strange issue, relating to running a BASH script via cron (invoked via crontab -e).
Here is the script:
#!/bin/bash
SIG1="$(iwconfig wlan0 | awk '/Quality=/ { print $2} ' | cut -c 9-10)"
SIG2="$(iwconfig wlan0 | awk '/Quality=/ { print $2} ' | cut -c 12-13)"
echo "$SIG1:$SIG2" >> test.txt
exit
When run from the commandline, I get the expected output of 45:70 echoed to the end of the text file. However, when I run the script via cron (using crontab -e) and the following entry:
* * * * * bash /home/rupert/test.sh
I just get the colon (:) echoed to the text file, the values SIG1 and SIG2 aren't created and I have no idea why. Why would running via cron mess up the script?
FWIW, here is the output of iwconfig wlan0 with no additional processing:
wlan0 IEEE 802.11abgn ESSID:"plumternet"
Mode:Managed Frequency:2.452 GHz Access Point: 00:18:84:2A:68:AD
Bit Rate=54 Mb/s Tx-Power=15 dBm
Retry long limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=46/70 Signal level=-64 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0
I am doing all this because I want to display the WiFi Link Quality value "46/70" on an LCD screen and the program I use does this by reading a text file. However, when run via cron, the values get lost...???
I am using cut -c 9-10 and cut -c 12-13 because I was thinking the "/" might be causing an issue in the script. I'd be happy to just use cut -c 9-13; I thought splitting it might fix the issue, but it didn't.
Help!!
Cool, thanks to you guys I realised it was a PATH problem; simply giving the full path to iwconfig (/sbin/iwconfig) fixed it. Here is a pic of the LCD screen now showing all the correct info:
http://img835.imageshack.us/img835/4175/20100825122413.jpg
You need to give the full path to any commands executed via cron. cron runs commands detached from any terminal and with a minimal environment (including a very short PATH), which means you need to set the environment yourself or use absolute paths. cut is probably available, but give the absolute path to iwconfig and awk.
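For reference, the script might look like this with absolute paths (a sketch: /sbin/iwconfig is what fixed it for the poster, /usr/bin/awk is a typical location that may differ on your system, and the log file is given an absolute path on the assumption it should always land in /home/rupert):
#!/bin/bash
# Use absolute paths, since cron runs with a minimal PATH.
SIG1="$(/sbin/iwconfig wlan0 | /usr/bin/awk '/Quality=/ { print $2 }' | cut -c 9-10)"
SIG2="$(/sbin/iwconfig wlan0 | /usr/bin/awk '/Quality=/ { print $2 }' | cut -c 12-13)"
echo "$SIG1:$SIG2" >> /home/rupert/test.txt
Alternatively, you can put a PATH=... line at the top of the crontab so the script can keep using the short command names.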
Change the permission of this file to 777
chmod 777 /home/rupert/test.sh
Maybe this will help.
I don't know the exact steps to prevent this from happening in a clean manner (I'm not all that much of a Linux expert), but this looks like a permissions problem to me. The user that the cron jobs run as isn't allowed to execute one of the commands you're calling.
If you fix the permissions, I think it may run just fine!