The simplified loop below exits at a random iteration when I use the set -e option. If I remove set -e, it always completes. I would like to use set -e if possible, but so far I am at a loss as to why it exits, and why it happens at a different iteration each time I run it (try it!). As you can see, the only commands are let and echo. Why would let or echo return a non-zero status at random times, or is something else going on?
#!/bin/bash
# Do Release configuration builds so we can set the build parameters
set -e
CFG=Release
for CASE in {0..511}
do
# CASE [0...511] iterate
# MMMM [2...255] random test cases
# NNNN [1..MMMM) random test cases
# RRRR [0...255] random test cases
# XXXX [0...255] random test cases
# DSXX [1...128] random test cases
# OASM [1...255] random test cases
# OLSM [1...255] random test cases
let "MMMM = $RANDOM % 254 + 2"
let "NNNN = $RANDOM % ($MMMM - 1) + 1"
let "RRRR = $RANDOM % 256"
let "XXXX = $RANDOM % 256"
let "DSXX = $RANDOM % 128 + 1"
let "OASM = $RANDOM % 255 + 1"
let "OLSM = $RANDOM % 255 + 1"
echo CFG = $CFG, CASE = $CASE, MMMM = $MMMM, NNNN = $NNNN, RRRR = $RRRR, XXXX = $XXXX, DSXX = $DSXX, OASM = $OASM, and OLSM = $OLSM
# Some other stuff (build and test), that is not causing the problem, goes here
done
# Some other stuff, that is not causing the problem, goes here
exit 0
Append || true to your let commands or use $((...)) for calculations.
From help let:
Exit Status: If the last ARG evaluates to 0, let returns 1; let returns 0 otherwise.
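For instance, under set -e a let whose expression happens to evaluate to 0 (e.g. when RANDOM % 256 yields 0) aborts the script, while either workaround keeps it alive. A minimal sketch:

```shell
#!/bin/bash
# let returns status 1 when its last expression evaluates to 0:
let "x = 0" && echo "status 0" || echo "let returned status $?"

# Fix 1: append || true so set -e does not abort on a zero result
let "y = RANDOM % 256" || true

# Fix 2: use arithmetic expansion in a plain assignment; an
# assignment's exit status does not depend on the computed value
z=$(( RANDOM % 256 ))
echo "z = $z"
```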
I have a CSV file like below:
E Run 1 Run 2 Run 3 Run 4 Run 5 Run 6 Mean
1 0.7019 0.6734 0.6599 0.6511 0.701 0.6977 0.680833333
2 0.6421 0.6478 0.6095 0.608 0.6525 0.6285 0.6314
3 0.6039 0.6096 0.563 0.5539 0.6218 0.5716 0.5873
4 0.5564 0.5545 0.5138 0.4962 0.5781 0.5154 0.535733333
5 0.5056 0.4972 0.4704 0.4488 0.5245 0.4694 0.485983333
I'm trying to find the row number where the final column has a value below a certain threshold. For example, below 0.6.
Using the above CSV file, I want to return 3 because E = 3 is the first row where Mean <= 0.60. If there is no value below 0.6 I want to return 0. I am in effect returning the value in the first column based on the final column.
I plan to initialize this number as a constant in gnuplot. How can this be done? I've tagged awk because I think it's related.
In case you want a gnuplot-only version... If you use a file instead, remove the datablock and replace $Data with your filename in double quotes.
Edit: It can be done without a dummy table, and shorter, using stats (check help stats). It is even shorter than the accepted solution (well, this is not code golf), and additionally platform-independent because it is gnuplot-only.
Furthermore, in case E could be any number, i.e. 0 as well, it might be better to first assign E = NaN and then test whether E is still NaN (see: gnuplot: How to compare to NaN?).
Script:
### conditional extraction into a variable
reset session
$Data <<EOD
E Run 1 Run 2 Run 3 Run 4 Run 5 Run 6 Mean
1 0.7019 0.6734 0.6599 0.6511 0.701 0.6977 0.680833333
2 0.6421 0.6478 0.6095 0.608 0.6525 0.6285 0.6314
3 0.6039 0.6096 0.563 0.5539 0.6218 0.5716 0.5873
4 0.5564 0.5545 0.5138 0.4962 0.5781 0.5154 0.535733333
5 0.5056 0.4972 0.4704 0.4488 0.5245 0.4694 0.485983333
EOD
E = NaN
stats $Data u ($8<=0.6 && E!=E? E=$1 : 0) nooutput
print E
### end of script
Result:
3.0
Actually, the OP wants to return E=0 if the condition was never met. Then the script would be like this:
E=0
stats $Data u ($8<=0.6 && E==0? E=$1 : 0) nooutput
Another awk. You could initialize the default return value ret in BEGIN, but since it is 0 there is really no point, as an empty var+0 produces the same effect. If the threshold value of 0.6 is never met before END is reached, that default is printed. If it is met, exit invokes the END block and ret is output:
$ awk '
NR>1 && $NF<0.6 { # final column has a value below a certain range
ret=$1 # I want to return 3 because E = 3
exit
}
END {
print ret+0
}' file
Output:
3
Something like this should do the trick:
awk 'NR>1 && $8<.6 {print $1;fnd=1;exit}END{if(!fnd){print 0}}' yourfile
I have a TCL script that say, has 30 lines of automation code which I am executing in the dc shell (Synopsys Design Compiler). I want to stop and exit the script at line 10, exit the dc shell and bring it back up again after performing a manual review. However, this time, I want to run the script starting from line number 11, without having to execute the first 10 lines.
Instead of having two scripts, one which contains code till line number 10 and the other having the rest, I would like to make use of only one script and try to execute it from, let's say, line number N.
Something like:
source a.tcl -line 11
How can I do this?
If you have Tcl 8.6+ and if you consider re-modelling your script on top of a Tcl coroutine, you can realise this continuation behaviour in a few lines. This assumes that you run the script from an interactive Tcl shell (dc shell?).
# script.tcl
if {[info procs allSteps] eq ""} {
# We are not re-entering (continuing), so start all over.
proc allSteps {args} {
yield; # do not run when defining the coroutine;
puts 1
puts 2
puts 3
yield; # step out, once first sequence of steps (1-10) has been executed
puts 4
puts 5
puts 6
rename allSteps ""; # self-clean, once the remainder of steps (11-N) have run
}
coroutine nextSteps allSteps
}
nextSteps; # run coroutine
Pack your script into a proc body (allSteps).
Within the proc body: place a yield to mark the hold/continuation point after your first steps (e.g., after the 10th step).
Create a coroutine nextSteps based on allSteps.
Protect the proc and coroutine definitions in a way that they do not cause a re-definition (when steps are pending)
Then, start your interactive shell and run source script.tcl:
% source script.tcl
1
2
3
Now, perform your manual review. Then, continue from within the same shell:
% source script.tcl
4
5
6
Note that you can run the overall 2-phased sequence any number of times (because of the self-cleanup of the coroutine proc: rename):
% source script.tcl
1
2
3
% source script.tcl
4
5
6
Again: All this assumes that you do not exit from the shell, and maintain your shell while performing your review. If you need to exit from the shell, for whatever reason (or you cannot run Tcl 8.6+), then Donal's suggestion is the way to go.
Update
If applicable in your case, you may improve the implementation by using an anonymous (lambda) proc. This simplifies the lifecycle management (avoiding re-definition, managing coroutine and proc, no need for a rename):
# script.tcl
if {[info commands nextSteps] eq ""} {
# We are not re-entering (continuing), so start all over.
coroutine nextSteps apply {args {
yield; # do not run when defining the coroutine;
puts 1
puts 2
puts 3
yield; # step out, once first sequence of steps (1-10) has been executed
puts 4
puts 5
puts 6
}}
}
nextSteps
The simplest way is to open the text file, parse it to get the first N commands (info complete is useful there), and then evaluate those (or the rest of the script). Doing this efficiently produces slightly different code when you're dropping the tail as opposed to when you're dropping the prefix.
proc ReadAllLines {filename} {
set f [open $filename]
set lines {}
# A little bit careful in case you're working with very large scripts
while {[gets $f line] >= 0} {
lappend lines $line
}
close $f
return $lines
}
proc SourceFirstN {filename n} {
set lines [ReadAllLines $filename]
set i 0
set script {}
foreach line $lines {
append script $line "\n"
if {[info complete $script] && [incr i] >= $n} {
break
}
}
info script $filename
unset lines
uplevel 1 $script
}
proc SourceTailN {filename n} {
set lines [ReadAllLines $filename]
set i 0
set script {}
for {set j 0} {$j < [llength $lines]} {incr j} {
set line [lindex $lines $j]
append script $line "\n"
if {[info complete $script]} {
if {[incr i] >= $n} {
info script $filename
set realScript [join [lrange $lines [incr j] end] "\n"]
unset lines script
return [uplevel 1 $realScript]
}
# Dump the prefix we don't need any more
set script {}
}
}
# If we get here, the script had fewer than n lines so there's nothing to do
}
Be aware that the kinds of files you're dealing with can get pretty large, and Tcl currently has some hard memory limits. On the other hand, if you can source the file at all, you're already within that limit…
I'm building a small monitoring solution and would like to understand the correct/best behavior in a situation where the previous reading is larger than the current reading. For example, the ifHCOutOctets SNMP object counts bytes transmitted from an interface in a Cisco router. How should the graphing application behave if this counter resets back to 0, for example because of a router reboot? In my opinion, the following algorithm is the correct behavior:
if [ ! $prev_val ]; then
# This reading will be used to set the baseline value for "prev_val" variable
# if "prev_val" does not already exist.
prev_val="$cur_val"
elif (( prev_val > cur_val )); then
# Counter value has been reset to zero.
# Use the "cur_val" variable.
echo "$cur_val"
prev_val="$cur_val"
else
# In case "cur_val" is higher than or equal to "prev_val",
# use the "cur_val"-"prev_val"
echo $(( cur_val - prev_val ))
prev_val="$cur_val"
fi
I also made a small example graph based on the algorithm above:
Traffic graph was built based on this:
reading 1: cur_val=0, prev_val will be 0
reading 2: 0-0=0(0 Mbps), cur_val=0, prev_val will be 0
reading 3: 20-0=20(160 Mbps), cur_val=20, prev_val will be 20
reading 4: 20-20=0(0 Mbps), cur_val=20, prev_val will be 20
reading 5: 50-20=30(240 Mbps), cur_val=50, prev_val will be 50
reading 6: counter reset (40 < 50), echo 40 (320 Mbps), cur_val=40, prev_val will be 40
reading 7: 70-40=30(240 Mbps), cur_val=70, prev_val will be 70
reading 8: no data from SNMP agent
reading 9: 90-70=20(160 Mbps), cur_val=90, prev_val will be 90
To me it looks like this small algorithm works correctly.
Please let me know if anything is unclear and I'll improve my question.
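As a quick sanity check, feeding the sample readings through the algorithm reproduces exactly the per-reading deltas listed above (a standalone sketch; an empty value stands in for the missed SNMP poll):

```shell
#!/bin/bash
# Simulate the nine readings above with the delta algorithm
# from the question; "" marks the reading with no SNMP data.
deltas=""
prev_val=""
for cur_val in 0 0 20 20 50 40 70 "" 90; do
  if [ -z "$cur_val" ]; then
    continue                           # no data from the agent; skip
  elif [ ! "$prev_val" ]; then
    prev_val="$cur_val"                # first reading sets the baseline
  elif (( prev_val > cur_val )); then
    deltas="$deltas $cur_val"          # counter reset: use cur_val as-is
    prev_val="$cur_val"
  else
    deltas="$deltas $(( cur_val - prev_val ))"
    prev_val="$cur_val"
  fi
done
echo "deltas:$deltas"                  # 0 20 0 30 40 30 20, as in the list
```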
The problem I can see with what you are echoing: in normal operation you echo the change in the counter, but after a router reboot you echo an absolute value. There is no way to compare these two. If you want to show only the delta of two readings, I would suggest:
if [ ! $prev_val ]; then
# This reading will be used to set the baseline value for "prev_val" variable
# if "prev_val" does not already exist.
prev_val="$cur_val"
elif (( prev_val > cur_val )); then
# Counter value has been reset to zero.
# Use the "cur_val" variable.
echo "Router/counter restarted"
# restart the counter as well
prev_val="$cur_val"
else
# In case "cur_val" is higher than or equal to "prev_val",
# use the "cur_val"-"prev_val"
echo $((cur_val-prev_val))
fi
You can also remove the elif part and just print the negative value to indicate a restart of the counter/router.
Particularly if the sample type is 'Counter32', it's worthwhile to account for the counters rolling over. I'm not sure whether it's a "best practice", but when you know you have a partial sample, you can also extrapolate the fragment of data across your sample as though you sustained that same rate of increase across your full sample. Unless your data is very bursty, it should make for a smoother graph.
partial_calc=$(( sample_time - ifCounterDiscontinuityTime ))
if (( interval > partial_calc )); then
    # Partial sample: extrapolate the fragment as though the same
    # rate had been sustained across the full interval
    sample=$(( curr_val * interval / partial_calc ))
elif (( curr_val > prev_val )); then
    sample=$(( curr_val - prev_val ))
else
    if [ "$type" = "Counter32" ]; then
        # Counter wrapped past 4294967295 back to 0 (4294967296 = 2^32)
        sample=$(( 4294967296 - prev_val + curr_val ))
    else
        sample=$curr_val
    fi
fi
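For the rollover branch alone, a small helper can make the wrap arithmetic explicit (a sketch; the function name is illustrative, and note that a genuine reset such as a reboot is indistinguishable from a wrap by the counter values alone):

```shell
#!/bin/bash
# A Counter32 counts 0..4294967295 and then wraps back to 0, so when
# the value appears to go backwards, the true delta is 2^32 - prev + cur.
counter32_delta() {
  local prev=$1 cur=$2
  if (( cur >= prev )); then
    echo $(( cur - prev ))
  else
    echo $(( 4294967296 - prev + cur ))
  fi
}
counter32_delta 100 150         # prints 50 (normal case)
counter32_delta 4294967290 5    # prints 11 (wrapped)
```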
for a in bar
do
for b in 1000000
do
montage -geometry 500 $a-$b-*-${0..20000..1000}.png \
$a-$b-${0..20000..1000}-final.jpg
done
done
I'm unable to get all the images with the numbers 0 1000 2000 ... 20000 using ${0..20000..1000}.
Is there another way in shell to do this?
There must be no $ before {START..END..STEP}
% echo -{0..20000..1000}-
-0- -1000- -2000- -3000- -4000- -5000- -6000- -7000- -8000- -9000- -10000- -11000- -12000- -13000- -14000- -15000- -16000- -17000- -18000- -19000- -20000-
That being said, you need a loop to go over these numbers. A word containing a range is simply replaced by its expansion, so the command is not run once per element but once with all of them together. It also means that even if you use the same range twice, the two expansions will not conveniently be combined pairwise.
Compare
% echo start a-{1..3}-b A-{1..3}-B end
start a-1-b a-2-b a-3-b A-1-B A-2-B A-3-B end
and
% for n in {1..3}; do echo start a-$n-b A-$n-B end; done
start a-1-b A-1-B end
start a-2-b A-2-B end
start a-3-b A-3-B end
So in your example instead of
montage -geometry 500 $a-$b-*-${0..20000..1000}.png \
$a-$b-${0..20000..1000}-final.jpg
you probably want to do
for n in {0..20000..1000}; do
montage -geometry 500 $a-$b-*-$n.png $a-$b-$n-final.jpg
done
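One more point worth knowing: brace expansion happens before variable expansion, so a range like {0..$max..$step} does not work. If the bounds need to come from variables, or the shell is plain sh without brace expansion, seq (assumed available) generates the same sequence at run time. A sketch:

```shell
#!/bin/sh
# Brace expansion is performed before variables are expanded, so
# {0..$max..$step} would not work; seq builds the list at run time.
max=20000
step=1000
for n in $(seq 0 "$step" "$max"); do
    printf '%s ' "$n"
done
printf '\n'
```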
#!/usr/bin/env python3
import os

file_names = ["a", "b"]
numbers = range(0, 20001, 1000)  # 0, 1000, ..., 20000 inclusive

for name in file_names:
    # Build the list of input images for this file name
    pngs = " ".join(
        "{}--1000000-0-0-0-1-{}.log.png".format(name, n) for n in numbers
    )
    os.system("montage -geometry 500 {} {}-final.jpg".format(pngs, name))
I'm trying to implement a PID controller following http://en.wikipedia.org/wiki/PID_controller
The mechanism I try to control works as follows:
1. I have an input variable which I can control. Typical values would be 0.5...10.
2. I have an output value which I measure daily. My goal for the output is roughly at the same range.
The two variables have strong correlation - when the process parameter goes up, the output generally goes up, but there's quite a bit of noise.
I'm following the implementation here:
http://code.activestate.com/recipes/577231-discrete-pid-controller/
Now the PID seems like it is correlated with the error term, not the measured level of output. So my guess is that I am not supposed to use it as-is for the process variable, but rather as some correction to the current value? How is that supposed to work exactly?
For example, if we take Kp=1, Ki=Kd=0, The process (input) variable is 4, the current output level is 3 and my target is a value of 2, I get the following:
error = 2-3 = -1
PID = -1
Then I should set the process variable to -1? or 4-1=3?
You need to think in terms of the PID controller correcting a manipulated variable (MV) for errors, and that you need to use an I term to get to an on-target steady-state result. The I term is how the PID retains and applies memory of the prior behavior of the system.
If you are thinking in terms of the output of the controller being changes in the MV, it is more of a 'velocity form' PID, and the memory of prior errors and behavior is integrated and accumulated in the prior MV setting.
From your example, it seems a manipulated value of -1 is not feasible, and you would like the controller to suggest a value like 3 to get a process output (PV) of 2. For the PID controller to make use of "The process (input) variable is 4,..." (the MV in my terms), Ki must be non-zero; if the system was at steady state, whatever had accumulated in the integral (sum_e = sum(e)) would precisely equal 4/Ki, so:
Kp = Ki = 1; Kd = 0
error = SV - PV = 2 - 3 = -1
sum_e = sum_e + error = 4/Ki - 1
MV = PID = Kp*error + Ki*sum_e = 1*(-1) + 1*(4/1 - 1) = -1 + 4 - 1 = 2
If you used a slower Ki than 1, it would smooth out the noise more and not adjust the MV so quickly:
Ki = 0.1 ;
MV = PID = Kp*error + Ki*sum_e = 1*(-1) + 0.1*(4/0.1 - 1) = -1 + 4 - 0.1 = 2.9
At steady state at target (PV = SV), sum_e * Ki should produce the steady-state MV:
PV = SV
error = SV - PV = 0
Kp * error = 0
MV = 3 = PID = 0 * Kp + Ki * sum_e
A nice way to understand the PID controller is to put units on everything and think of Kp, Ki, Kd as conversions of the process error, accumulated error*timeUnit, and rate-of-change of error/timeUnit into terms of the manipulated variable, and that the controlled system converts the controller's manipulated variable into units of output.
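The worked numbers above can be reproduced with a minimal positional-form PID step (a sketch, not the ActiveState recipe; the function and variable names are illustrative):

```python
def pid_step(error, sum_e, kp, ki, kd=0.0, d_error=0.0):
    """One step of a positional PID: MV = Kp*e + Ki*sum(e) + Kd*de.

    Returns the new manipulated variable and the updated integral.
    """
    sum_e += error
    mv = kp * error + ki * sum_e + kd * d_error
    return mv, sum_e

# Steady state at MV = 4 means the integral holds 4/Ki before the step.
# Example from the text: SV = 2, PV = 3, so error = -1.
mv, _ = pid_step(-1, 4 / 1.0, kp=1, ki=1)        # mv is 2.0
mv_slow, _ = pid_step(-1, 4 / 0.1, kp=1, ki=0.1)  # mv_slow is approx. 2.9
print(mv, mv_slow)
```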