echo -n fails with piped function - bash

I'm a bit of a noob so bear with me:
I have a pretty basic script to check the CPU temp of the RPi, and I need the output on a single line as a requirement for reporting to a messenger service with a webhook. The output should look something like "54.0°C,129.2°F". I know the switch to append to an existing line with echo, -n, but because I am piping the Fahrenheit conversion to the calculator utility (bc), I get a syntax error if I try to start that line with "echo -n etc."
I also realize that I don't really need to print the °C and °F, but I demand luxury!
Here's my script (which works fabulously if I don't try to cram it all onto the same line):
#!/bin/bash
(
cpuTemp0=$(cat /sys/class/thermal/thermal_zone0/temp)
cpuTemp1=$(($cpuTemp0/1000))
cpuTemp2=$((cpuTemp0/100))
cpuTempM=$(($cpuTemp2 % cpuTemp1))
#date
#echo cpu temp in °C and °F
echo -n $cpuTemp1"."$cpuTempM
echo -n "°C,"
echo -n "$cpuTemp1 * 1.8 + 32"|bc
echo "°F"
) > /home/pi/bin/tlog
the error I receive there is:
(standard_in) 1: syntax error
So the question is this: how do I get the °F on the same line as the conversion formula without borking the |bc part? I am positive |bc is the issue, as the script runs fine if I remove it, but then it doesn't do the math for me. =(
Any help appreciated, thanks.

As you have discovered, bc wants a properly terminated line. So why don't we just give it one?
We can rearrange your code to do all the computation first and then do a single echo at the end:
#!/bin/bash
cpuTemp0=$(cat /sys/class/thermal/thermal_zone0/temp)
cpuTemp1=$(($cpuTemp0/1000))
cpuTemp2=$((cpuTemp0/100))
cpuTempM=$(($cpuTemp2 % cpuTemp1))
tempF=$(echo "$cpuTemp1 * 1.8 + 32"|bc)
echo -n "${cpuTemp1}.${cpuTempM}°C,${tempF}°F" > /home/pi/bin/tlog

Related

Passing bc into a variable isn't working right for me in Bash

I think I'm going crazy. I have a different script that's working fine with the usual way of passing the output of bc into a variable, but this one I just can't get working.
The relevant bits of code are this:
PERCENT_MARKUP=".5"
$percentonumber=$(bc -l <<<"$PERCENT_MARKUP/100")
echo "Percent to number = $percentonumber"
$numbertomultiply=$(bc -l <<<"$percentonumber + 1")
echo "Number to multiply = $numbertomultiply"
$MARKUP=$(bc -l <<<"$buyVal * $numbertomultiply")
#$MARKUP=$(bc -l <<<"$buyVal * (1+($PERCENT_MARKUP/100))")
echo "Markup = $MARKUP"
Originally, I only had the second to last line that's currently commented out but I broke it down to try and troubleshoot.
The error I'm getting when I'm trying to run it is:
./sell_zombie_brains: line 65: =.00500000000000000000: command not found
where the .0050000000000000 is replaced by the output of bc. I've even tried the following at multiple points in the file including right after #!/bin/bash with and without -l
$test=`bc -l <<<"1"`
echo "$test"
echo
$test=$(bc -l <<<"1")
echo "$test"
echo
$test=$(echo "1"|bc -l)
echo "$test"
echo
And each one outputs ./sell_zombie_brains: line 68: =1: command not found
I'm really at my wits end. I don't know what I'm missing here. Any insight as to why it's behaving this way is appreciated.
You can't assign variables with the sigil $:
instead of
$percentonumber=$()
it's
percentonumber=$()
If you are a bash beginner, some good pointers to start learning:
https://stackoverflow.com/tags/bash/info
FAQ
Guide
Ref
bash hackers
quotes
Check your script
And avoid recommendations to learn from the tldp.org web site; the TLDP Bash guide (the ABS in particular) is outdated, and in some cases just plain wrong. The BashGuide and the bash-hackers' wiki are far more reliable.

Bash Script IF statement not functioning

I am currently testing a simple dictionary attack using bash scripts. I have encoded my password "Snake" with sha256sum by simply typing the following command:
echo -n Snake | sha256sum
This produced the following:
aaa73ac7721342eac5212f15feb2d5f7631e28222d8b79ffa835def1b81ff620 *-
I then copy pasted the hashed string into the program, but the script is not doing what is intended to do. The script is (Note that I have created a test dictionary text file which only contains 6 lines):
echo "Enter:"
read value
cat dict.txt | while read line1
do
atax=$(echo -n "$line1" | sha256sum)
if [[ "$atax" == "$value" ]];
then
echo "Cracked: $line1"
exit 1
fi
echo "Trying: $line1"
done
Result:
Trying: Dog
Trying: Cat
Trying: Rabbit
Trying: Hamster
Trying: Goldfish
Trying: Snake
The code should display "Cracked: Snake" and terminate, when it compares the hashed string with the word "Snake". Where am I going wrong?
EDIT: The bug was indeed the DOS lines in my textfile. I made a unix file and the checksums matched. Thanks everyone.
One problem is that, as pakistanprogrammerclub points out, you're never initializing name (as opposed to line1).
Another problem is that sha256sum does not just print out the checksum, but also *- (meaning "I read the file from standard input in binary mode").
I'm not sure if there's a clean way to get just the checksum — probably there is, but I can't find it — but you can at least write something like this:
atax=$(echo -n "$name" | sha256sum | sed 's/ .*//')
(using sed to strip off everything from the space onwards).
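cut or awk would also work for keeping only the first whitespace-delimited field. A small sketch (the resulting hash value itself is not assumed, only its 64-hex-digit length):

```shell
# sha256sum prints "<hash> <marker>"; keep only the first field
atax=$(echo -n "Snake" | sha256sum | awk '{print $1}')
echo "$atax"
```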
A couple of issues: the variable name is not set anywhere - do you mean value? Also, it's better form to use redirection instead of cat:
while read ...; do ... done <dict.txt
Variables set by a while loop in a pipeline are not available in the parent shell (not the other way around, as I mistakenly said before) - it's not an issue here, though.
Could be a cut n paste error - add an echo after the first read
echo "value \"$value\""
also after atax is set
echo "line1 \"$line1\" atax \"$atax\""

Averaging Shell Script giving errors

I have a shell script that should take the average of several data files and create a new file. Here is a copy of the script:
#! /bin/bash
cat *thist > tmp.dat
> aves.txt
nx=10
ss=5
for i in $(seq 1 $nx)
do
a=$i*2-1
export dummy=$(awk 'NR=='$a' {print $1}' tmp.dat)
awk '$1=='$dummy' {print $5}' tmp.dat > $dummy.dat
export ave=$(awk 'NR>='$ss' {sum+=$1 b++} END {print sum/b}' $dummy.dat)
echo $dummy $ave >> aves.txt
done
rm *.dat
After reading in 100 .thist files, this is what the output file looks like:
0 545.608
4e-07 290.349
8e-07 613.883
1.2e-06 295.655
1.6e-06 310.78
2e-06 305.01
2.4e-06 300.733
2.8e-06 308.319
3.2e-06 298.728
3.6e-06 311.961
I am getting an error on lines 1 and 3, as the numbers in the second column should be between 250 and 350. I can't figure out what I am doing wrong. I have checked all the individual data files and all of the second column numbers are between 250 and 350. I have also run this script reading in only 10 files, and it seems to work just fine. I'm sorry if this is a dumb question or if it's confusing, I'm pretty new to shell scripts. Thanks in advance for your help.
Sam, you did not post the actual errors, but it would appear your line-1 error is due to the space between #! and /bin/bash (remove it). Then, to enable debugging output, add set -x as line 2 (or run your script with bash -x scriptname, which will do the same thing). Post the line and the actual error that occurs.
Your line-3 error is likely due to no file matching the file glob *thist. If there are additional characters that follow thist in the filename, you will need *thist* (or *thist.txt if they all have .txt extensions).
Your next line is more properly written as :> aves.txt (to truncate the file to zero length).
Finally, your arithmetic should be a=$((i * 2 - 1)), or (not recommended) you can use the old expr syntax a=$(expr $i \* 2 - 1) (note: you must escape the * as \*).
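A minimal sketch of the corrected arithmetic (just the index computation, leaving out the awk steps):

```shell
nx=10
for i in $(seq 1 "$nx"); do
    # Arithmetic expansion; no $ needed on variables inside $(( ))
    a=$((i * 2 - 1))
    echo "$a"   # odd row numbers: 1, 3, 5, ... 19
done
```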

1394661620440271000/1000000: No such file or directory

I am running this command in my shell script to get time in milliseconds:
START=`$(date +%s%N)/1000000`
However I keep getting the error:
1394661620440271000/1000000: No such file or directory
I tried to change the code by adding brackets, extra dollar signs, but I keep getting different kinds of errors. How can I fix the code? Could anyone help with that?
assuming that you're using bash:
START=$(( $(date '+%s%N') / 1000000 ))
you can't just say / on the command line to divide numbers. (( ... )) does arithmetic evaluation in bash.
I think you want the following:
START=$(($(date +%s%N)/1000000))
You could also use plain string manipulation:
$ start=$(date '+%s%N')
$ echo $start
1394663274979099354
$ echo ${start:0:-6}
1394663274979
The printf statement can round a value to the nearest millisecond:
printf "%0.3f\n" $(date +%s.%N)
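Putting the arithmetic-expansion form to work, a sketch of timing a command in milliseconds (this assumes GNU date for %N and a sleep that accepts fractional seconds; the sleep is just a stand-in for real work):

```shell
# Milliseconds since the epoch, before and after the work
start=$(($(date +%s%N) / 1000000))
sleep 0.2   # stand-in for the command being timed
end=$(($(date +%s%N) / 1000000))
echo "elapsed: $((end - start)) ms"
```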

Performance profiling tools for shell scripts

I'm attempting to speed up a collection of scripts that invoke subshells and do all sorts of things. I was wondering if there are any tools available to time the execution of a shell script and its nested shells and report on which parts of the script are the most expensive.
For example, if I had a script like the following.
#!/bin/bash
echo "hello"
echo $(date)
echo "goodbye"
I would like to know how long each of the three lines took. time will only give me the total time for the script. bash -x is interesting, but does not include timestamps or other timing information.
You can set PS4 to show the time and line number. Doing this doesn't require installing any utilities and works without redirecting stderr to stdout.
For this script:
#!/bin/bash -x
# Note the -x flag above, it is required for this to work
PS4='+ $(date "+%s.%N ($LINENO) ")'
for i in {0..2}
do
echo $i
done
sleep 1
echo done
The output looks like:
+ PS4='+ $(date "+%s.%N ($LINENO) ")'
+ 1291311776.108610290 (3) for i in '{0..2}'
+ 1291311776.120680354 (5) echo 0
0
+ 1291311776.133917546 (3) for i in '{0..2}'
+ 1291311776.146386339 (5) echo 1
1
+ 1291311776.158646585 (3) for i in '{0..2}'
+ 1291311776.171003138 (5) echo 2
2
+ 1291311776.183450114 (7) sleep 1
+ 1291311777.203053652 (8) echo done
done
This assumes GNU date, but you can change the output specification to anything you like or whatever matches the version of date that you use.
Note: If you have an existing script that you want to do this with without modifying it, you can do this:
PS4='+ $(date "+%s.%N ($LINENO) ")' bash -x scriptname
In the upcoming Bash 5, you will be able to save forking date (but you get microseconds instead of nanoseconds):
PS4='+ $EPOCHREALTIME ($LINENO) '
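$EPOCHREALTIME can also be used directly for a one-off elapsed-time measurement. A sketch (Bash 5+; note the value is a decimal string, so shell integer arithmetic won't work on it, and the decimal separator can depend on the locale):

```shell
t0=$EPOCHREALTIME
sleep 0.1   # stand-in for the work being timed
t1=$EPOCHREALTIME
# Use awk for the floating-point subtraction
elapsed=$(awk -v a="$t0" -v b="$t1" 'BEGIN { printf "%.3f", b - a }')
echo "elapsed: ${elapsed}s"
```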
You could pipe the output of running under -x through to something that timestamps each line when it is received. For example, tai64n from djb's daemontools.
As a basic example:
sh -x slow.sh 2>&1 | tai64n | tai64nlocal
This conflates stdout and stderr but it does give everything a timestamp.
You'd have to then analyze the output to find expensive lines and correlate that back to your source.
You might also conceivably find using strace helpful. For example,
strace -f -ttt -T -o /tmp/analysis.txt slow.sh
This will produce a very detailed report, with lots of timing information in /tmp/analysis.txt, but at a per-system call level, which might be too detailed.
Sounds like you want to time each echo. If echo is all that you're doing, this is easy:
alias echo='time echo'
If you're running other commands, this obviously won't be sufficient.
Updated
See enable_profiler/disable_profiler in
https://github.com/vlovich/bashrc-wrangler/blob/master/bash.d/000-setup
which is what I use now. I haven't tested it on every version of Bash, but if you have the ts utility installed it works very well, with low overhead.
Old
My preferred approach is below. The reason is that it supports OS X as well (which doesn't have a high-precision date) and runs even if you don't have bc installed.
#!/bin/bash

_profiler_check_precision() {
    if [ -z "$PROFILE_HIGH_PRECISION" ]; then
        #debug "Precision of timer is unknown"
        if which bc > /dev/null 2>&1 && date '+%s.%N' | grep -vq '\.N$'; then
            PROFILE_HIGH_PRECISION=y
        else
            PROFILE_HIGH_PRECISION=n
        fi
    fi
}

_profiler_ts() {
    _profiler_check_precision
    if [ "y" = "$PROFILE_HIGH_PRECISION" ]; then
        date '+%s.%N'
    else
        date '+%s'
    fi
}

profile_mark() {
    _PROF_START="$(_profiler_ts)"
}

profile_elapsed() {
    _profiler_check_precision
    local NOW="$(_profiler_ts)"
    local ELAPSED=
    if [ "y" = "$PROFILE_HIGH_PRECISION" ]; then
        ELAPSED="$(echo "scale=10; $NOW - $_PROF_START" | bc | sed 's/\(\.[0-9]\{0,3\}\)[0-9]*$/\1/')"
    else
        ELAPSED=$((NOW - _PROF_START))
    fi
    echo "$ELAPSED"
}

do_something() {
    local _PROF_START
    profile_mark
    sleep 10
    echo "Took $(profile_elapsed) seconds"
}
Here's a simple method that works on almost every Unix and needs no special software:
enable shell tracing, e.g. with set -x
pipe the output of the script through logger:
sh -x ./slow_script 2>&1 | logger
This writes the output to syslog, which automatically adds a time stamp to every message. If you use Linux with journald, you can get high-precision time stamps using
journalctl -o short-monotonic _COMM=logger
Many traditional syslog daemons also offer high precision time stamps (milliseconds should be sufficient for shell scripts).
Here's an example from a script that I was just profiling in this manner:
[1940949.100362] bremer root[16404]: + zcat /boot/symvers-5.3.18-57-default.gz
[1940949.111138] bremer root[16404]: + '[' -e /var/tmp/weak-modules2.OmYvUn/symvers-5.3.18-57-default ']'
[1940949.111315] bremer root[16404]: + args=(-E $tmpdir/symvers-$krel)
[1940949.111484] bremer root[16404]: ++ /usr/sbin/depmod -b / -ae -E /var/tmp/weak-modules2.OmYvUn/symvers-5.3.18-57-default 5.3.18-57>
[1940952.455272] bremer root[16404]: + output=
[1940952.455738] bremer root[16404]: + status=0
where you can see that the "depmod" command is taking a lot of time.
Copied from here:
Since I've ended up here at least twice now, I implemented a solution:
https://github.com/walles/shellprof
It runs your script, transparently clocks all lines printed, and at the end prints a top 10 list of the lines that were on screen the longest:
~/s/shellprof (master|✔) $ ./shellprof ./testcase.sh
quick
slow
quick
Timings for printed lines:
1.01s: slow
0.00s: <<<PROGRAM START>>>
0.00s: quick
0.00s: quick
~/s/shellprof (master|✔) $
I'm not aware of any shell profiling tools.
Historically one just rewrites too-slow shell scripts in Perl, Python, Ruby, or even C.
A less drastic idea would be to use a faster shell than bash. Dash and ash are available for all Unix-style systems and are typically quite a bit smaller and faster.
