For example, there are many LC3 assembly programs that implement a right shift. Does it make sense to evaluate them based on how fast they would run? Is there a way to embed an assembly program in a scripting language such as Python to measure its speed?
It can be measured indirectly by running a script:
1. Create a script for lc3sim, denoted lc3_run.txt:
#cat lc3_run.txt
file test.obj
c
#
2. Create a shell script, denoted perf_test.sh:
#cat perf_test.sh
#!/bin/bash
set -x
n=0
while [ $n -lt 1000 ]
do
    lc3sim -s lc3_run.txt
    ((n+=1))
    echo $n
done
#chmod +x perf_test.sh
#time ./perf_test.sh
real 0m14.512s
user 0m3.552s
sys 0m3.304s
3. Compare the run times to evaluate which program is faster under the same conditions (HW/SW).
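For example, here is a sketch of step 3 that times two candidate implementations in turn, assuming hypothetical object files shift_a.obj and shift_b.obj:

for obj in shift_a shift_b; do
    printf 'file %s.obj\nc\n' "$obj" > lc3_run.txt   # regenerate the sim script
    echo "timing $obj"
    time ./perf_test.sh
done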
For a related reference, see How to evaluate efficiency of assembler code: https://softwareengineering.stackexchange.com/questions/357146/how-to-evaluate-efficiency-of-assembler-code
I have two registers, reg_a and reg_b, each 32 bits wide. reg_a is used to store the epoch time (Unix time), so it can hold a maximum of 2^32 - 1. If an overflow occurs, the overflow should be stored in reg_b. I also want to write both values to a file rom.txt. I am trying to do this in a Makefile. This is how far I got (it is more pseudocode than code; there are syntax errors). I would be happy to know if there is a better way to do this.
# should be 2^32-1, but lets consider the below for example
EPOCH_MAX = 1500000000

identifier:
    # get the epoch value
    epoch=$(shell date +%s)
    # initialize the reg_a, reg_b, assuming that overflow has not occurred
    reg_a=$(epoch)
    reg_b=0
    # if overflow occurs
    if [ ${epoch} -gt $(EPOCH_MAX)] ; then \
        reg_a=$(EPOCH_MAX) ;\
        reg_b=$(shell $(epoch)\-$(EPOCH_MAX) | bc) ;\
    fi ;\
    # here I want to print the values in a text file
    echo $$(reg_a) > rom.txt
    echo $$(reg_b) >> rom.txt
I am a novice with Makefiles. The above is just pseudocode describing what I want to do (mostly pieced together from reading some webpages). I will be happy if someone can help me with the above. Thanks.
You've been asking a lot of questions about make. I think you might benefit from spending some time reading the GNU make manual.
Pertinent to your question, each logical line in a recipe is run in a separate shell. So, you cannot set a shell variable in one logical line, then use the results in another one.
A "logical line" is all the physical lines where the previous one ends in backslash/newline.
So:
identifier:
    # get the epoch value
    epoch=$(shell date +%s)
    # initialize the reg_a, reg_b, assuming that overflow has not occurred
    reg_a=$(epoch)
    reg_b=0
This will run five separate shells, one for each line (including the comments! Every line indented with a TAB character is considered a recipe line, even one that begins with a comment).
On the other hand, this:
    if [ ${epoch} -gt $(EPOCH_MAX)] ; then \
        reg_a=$(EPOCH_MAX) ;\
        reg_b=$(shell $(epoch)\-$(EPOCH_MAX) | bc) ;\
    fi ;\
This runs the entire if-statement in a single shell, because the backslash/newline pairs create a single logical line.
Second, you have to keep very clear in your mind the difference between make variables and shell variables. In the above, the line epoch=$(shell date +%s) sets the shell variable epoch (whose value is immediately lost again when the shell exits).
The line reg_a=$(epoch) is referencing the make variable epoch, which is not set and so is empty.
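Putting those two points together, here is a minimal sketch of one way the rule could be written (assuming GNU make and rom.txt as the target): the whole recipe is one logical line, and the doubled dollar signs ($$) are escaped so that the shell, not make, expands the variables. Recipe lines must be indented with a TAB.

EPOCH_MAX = 1500000000

rom.txt:
    epoch=$$(date +%s); \
    reg_a=$$epoch; reg_b=0; \
    if [ $$epoch -gt $(EPOCH_MAX) ]; then \
        reg_a=$(EPOCH_MAX); \
        reg_b=$$((epoch - $(EPOCH_MAX))); \
    fi; \
    echo $$reg_a > $@; \
    echo $$reg_b >> $@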
I need to process a big number of data files with gnuplot in order to produce images which are collected into a movie. As the procedure is time-consuming, I would like to produce the frames in parallel, and a small message should be printed from time to time to inform the user of the progress.
I tried the makefile approach:
SOURCES=$(wildcard ./*.in)
OBJECTS=$(SOURCES:.in=.out)

all: $(OBJECTS)

%.out: %.in
    ./worker.sh $< $@
where worker.sh is:
#!/bin/sh
gnuplot << EOF
set some_gnuplot_options
set output "$2"
plot "$1"
EOF
But:
1. I cannot print the progress messages,
2. I would prefer a single-file solution (I have not succeeded in putting the content of worker.sh directly in the makefile),
3. this solution introduces quite a lot of overhead compared with a single gnuplot script that contains all the instructions.
Probably the definitive solution would be a nice C++ interface to gnuplot, but I don't know the existing ones very well and I'm not sure how to do the job. Any other ideas? Please avoid suggesting new or uncommon programs like GNU parallel, as I cannot have them on some of the machines I use.
From your comment it sounds as if you are allowed to use your own scripts. GNU Parallel can be used as a script and does not need to be installed, and you can then create a file parallel_plotter:
#!/home/tange/bin/parallel --shebang-wrap -v A={} /usr/bin/gnuplot
name=system("echo $A")
set term png
set output name.".png"
plot sin(x*name)/x
Substitute /home/tange/bin/parallel with the full path to where you put the script parallel.
Then:
chmod 755 parallel_plotter
./parallel_plotter 1 2 3 4 5
This will print a line for each completed run.
To avoid the full path to /home/tange/bin/parallel I can come up with this solution:
#!/usr/bin/env gnuplot
name=system("echo $A")
set term png
set output name.".png"
plot sin(x*name)/x
Then:
chmod 755 parallel_plotter
parallel -v A={} ./parallel_plotter ::: 1 2 3 4 5
You are worried that spawning gnuplot will give a lot of overhead. I tested the above with:
./parallel_plotter {1..1000}
That took 10 secs for the 1000 jobs, so the overhead of starting gnuplot on my system is around 10 ms per job.
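For completeness: if GNU Parallel really cannot be used, the Makefile approach from the question can be made single-file and given progress messages, with make -j providing the parallelism. A sketch, assuming a gnuplot new enough to support the -e option:

SOURCES = $(wildcard ./*.in)
OBJECTS = $(SOURCES:.in=.out)

all: $(OBJECTS)

%.out: %.in
    @echo "plotting $< -> $@"
    @gnuplot -e 'set term png; set output "$@"; plot "$<"'

Running make -j4 then produces four frames at a time, and the echo lines serve as progress messages.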
I would like to write a script to execute the steps outlined below. If someone can provide simple examples of how to modify files and search through folders using a script (not necessarily solving my problem below), I will greatly appreciate it.
1. Submit job MyJob in currentDirectory using myJobShellFile.sh to a queue.
2. Upon completion of MyJob, go to currentDirectory/myJobDataFolder.
3. In myJobDataFolder there are folders
   myJobData.0000 myJobData.0001 myJobData.0002 myJobData.0003
   I want to find the maximum number maxIteration of all the listed folders. Here it would be maxIteration=0003.
4. In file myJobShellFile.sh, the last line says
   mpiexec ./main input myJobDataFolder
   and I want to append this line to
   'mpiexec ./main input myJobDataFolder 0003'
5. I want to submit MyJob to the queue while maxIteration < 10.
6. Upon completion of MyJob, find the new maxIteration, change this number in myJobShellFile.sh, and go to step 4.
I think people typically write Python scripts to do this kind of thing, but I am having a hard time finding out how. I probably don't know the correct terminology for this procedure. I am also aware that the script will vary slightly depending on the queuing system, but any help will be greatly appreciated.
Quite a few aspects of your question are unclear, such as the meaning of “submit job MyJob in currentDirectory using myJobShellFile.sh to a queue”, “append this line to 'mpiexec ./main input myJobDataFolder 0003'”, how you detect when a job is done, the relevant parts of myJobShellFile.sh, and some other details. If you can list the specific shell commands you use in each iteration of job submission, then you can post a better question, with a bash tag instead of python.
In the following script, I put a ### at the end of any line where I am guessing what you are talking about. Lines ending with ### may be irrelevant to whatever you actually do, or may be pseudocode. Anyway, the general idea is that the script is supposed to do the things you listed in your items 1 to 5. This script assumes that you have modified myJobShellFile.sh to say
mpiexec ./main input $1 $2
instead of
mpiexec ./main input
because it is simpler to use parameters to modify what you tell mpiexec than it is to keep modifying a shell script. Also, it seems to me you would want to increment maxIter before submitting the next job, instead of after. If so, remove the # from the t=$((1$maxIter+1)); maxIter=${t#1} line. Note: see the “Parameter Expansion” section of man bash regarding expansion of the ${var#txt} form, and the “Arithmetic Expansion” section regarding the $((expression)) form. The 1$maxIter and similar forms are used to change text like 0018 (which is not a valid bash number, because 8 is not an octal digit) to 10018.
#!/bin/bash
./myJobShellFile.sh MyJob ###
maxIter=0
while true; do
    waitforjobcompletion ###
    cd ./myJobDataFolder
    maxFile=$(ls myJobData* | tail -1)
    maxIter=${maxFile#myJobData.}   # get max extension
    # If you want to increment maxIter, uncomment the next line
    # t=$((1$maxIter+1)); maxIter=${t#1}
    cd ..
    if [[ 1$maxIter -lt 11000 ]] ; then
        ./myJobShellFile.sh MyJobDataFolder $maxIter
    else
        break
    fi
done
Notes: (1) To test with smaller runs than 1000 submissions, replace 11000 with 10000+n; for example, to do 123 runs, replace it with 10123. (2) In writing the above script, I assumed that not-previously-known numbers of output files appear in the output directory from time to time. If instead exactly one output file appears per run, and you just want to do one run per value for the values 0000, 0001, 0002, ..., 0999, 1000, then use a script like the following. (For testing with a smaller number than 1000, replace 1000 with (e.g.) 0020. The leading zeroes in these numbers tell bash to pad the generated numbers with leading zeroes.)
#!/bin/bash
for iter in {0000..1000}; do
    ./myJobShellFile.sh MyJobDataFolder $iter
    waitforjobcompletion ###
done
(3) If the system has a command that sleeps while it waits for a job to complete on the supercomputing resource, it is reasonable to use that command in place of waitforjobcompletion in the above scripts. Otherwise, if the system has a command jobisrunning that returns true if a job is still running, replace waitforjobcompletion with something like the following:
while jobisrunning ; do sleep 15; done
This will run the jobisrunning command; if it returns true, the shell will sleep for 15 seconds and then retest. Here is an example that illustrates waiting for a file to appear and then for it to go away:
while [ ! -f abc ]; do sleep 3; echo no abc; done
while ls abc >/dev/null 2>&1; do sleep 3; echo an abc; done
The second line's test could be [ -f abc ] instead; I showed a longer example to illustrate how to suppress output and error messages by routing them to /dev/null. (4) To reverse the sense of a while statement's test, replace the word while with until. For example, while [ ! -f abc ]; ... is equivalent to until [ -f abc ]; ....
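As a standalone demonstration of the leading-1 trick described earlier, runnable directly in bash:

maxIter=0018            # not a valid bash number: 8 is not an octal digit
t=$((1$maxIter + 1))    # prepending 1 as text gives 10018; 10018 + 1 = 10019
maxIter=${t#1}          # strip the leading 1 again -> 0019
echo "$maxIter"         # prints 0019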
I found partial solutions on several sites and pulled several parts together, but I still couldn't figure it out.
Here is what I am doing:
I am running a simple java program from Terminal, and need to find the average runtime for the program.
What I am doing is running the command several times, finding the total time, and then dividing that total time by the number of times I ran the program.
I would also like to acquire the output of the program rather than displaying it on standard output.
Here is my current code and the output.
Shell Script:
startTime=$(date +%s%N)
for ((i = 0; i < $runTimes; i++))
do
    java Program test.txt > /dev/null
done
endTime=$(date +%s%N)
timeDiff=$(( $endTime - $startTime ))
timeAvg=$(( $timeDiff / $runTimes ))
echo "Avg Time Taken: "
echo $timeAvg
Output:
./run: line 12: 1305249784N: value too great for base (error token is "1305249784N")
The line number 12 is off because this code is part of a larger file; line 12 is the line where timeDiff is evaluated.
I appreciate any help, and apologize if this question is redundant or off-topic.
On my machine, I don't see what the %N format for date is getting you, as the value seems to end in 7 zeros, BUT it does make a much bigger number to evaluate in the math, i.e. 1305250833570000000. Do you really need nanosecond precision? I'll bet that if you go with just %s it will be fine.
Otherwise you look to be on the right track.
P.S.
Oh yeah, minor point:
echo "Avg Time Taken: $timeAvg"
is a simpler way to achieve your required output ;-)
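Putting both suggestions together, here is a minimal sketch of the loop with second resolution (assuming runTimes is set earlier in your script):

startTime=$(date +%s)
for ((i = 0; i < runTimes; i++)); do
    java Program test.txt > /dev/null
done
endTime=$(date +%s)
timeAvg=$(( (endTime - startTime) / runTimes ))
echo "Avg Time Taken: $timeAvg"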
Option 2. You could take the date calculations out altogether and turn your loop into a small script. Then you can use a built-in feature of the shell:
time myJavaTest.sh
This will give you details like:
real 0m0.049s
user 0m0.016s
sys 0m0.015s
I hope this helps.
When writing more than a trivial script in bash, I often wonder how to make the code testable.
It is typically hard to write tests for bash code, because bash is low on functions that take a value and return a value, and high on functions that check or set some aspect of the environment, modify the file system, or invoke a program: functions that depend on the environment or have side effects. Thus the setup and test code become much more complicated than the code they test.
For example, consider a simple function to test:
function add_to_file() {
    local f=$1
    cat >> $f
    sort -u $f -o $f
}
Test code for this function might consist of:
add_to_file.before:
foo
bar
baz
add_to_file.after:
bar
baz
foo
qux
And test code:
function test_add_to_file() {
    cp add_to_file.{before,tmp}
    add_to_file add_to_file.tmp
    cmp add_to_file.{tmp,after} && echo pass || echo fail
    rm add_to_file.tmp
}
Here 5 lines of code are tested by 6 lines of test code and 7 lines of data.
Now consider a slightly more complicated case:
function distribute() {
    local file=$1 ; shift
    local hosts=( "$@" )
    for host in "${hosts[@]}" ; do
        rsync -ae ssh $file $host:$file
    done
}
I can't even say how to start writing a test for that...
So, is there a good way to do TDD in bash scripts, or should I give up and put my efforts elsewhere?
So here is what I learned:
There are some testing frameworks written in bash and for bash; however...

It is not so much that Bash is unsuitable for TDD (although some other languages come to mind that are a better fit), but that the typical tasks Bash is used for (installation, system configuration) are hard to write tests for, and in particular hard to set up tests for.

The poor data-structure support in Bash makes it hard to separate logic from side effects, and indeed there is typically little logic in Bash scripts. That makes it hard to break scripts into testable chunks. There are some functions that can be tested, but that is the exception, not the rule.

Functions are a good thing (tm), but they can only go so far. Nested functions can be even better, but they are also limited.

At the end of the day, with major effort some coverage can be obtained, but it will test the less interesting parts of the code and will leave the bulk of the testing as good (or bad) old manual testing.
Meta: I decided to answer (and accept) my own question, because I was unable to choose between Sinan Ünür's (voted up) and mouviciel's (voted up) answers, which were equally useful and insightful. I want to note Stefano Borini's answer, which, although it did not impress me initially, I learned to appreciate over time. Also, his design patterns or best practices for shell scripts answer (voted up), referred to above, was useful.
If you are writing the code at the same time as the tests, try to make it high on functions that use nothing besides their parameters and do not modify the environment. That is, if your function might as well run in a subshell, it will be easy to test: it takes some arguments and outputs something to stdout, or to a file, or maybe it does something on the system, but the caller does not feel side effects.

Yes, you will end up with a big chain of functions passing down some WORKING_DIR variable that might as well be global, but this is a minor inconvenience compared with the task of tracking what each function reads and modifies. Enabling unit tests is just a free bonus, too.

Try to minimize cases where you need output. A little subshell abuse will go a long way toward keeping things nicely separated (at the expense of performance).

Instead of a linear structure, where functions are called, set some environment, then other ones are called, all pretty much on one level, try to go for a deep call tree with minimal data passed back. Returning stuff in bash is inconvenient if you adopt self-imposed abstinence from global vars...
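To illustrate the idea, here is a minimal sketch (the function and test names are made up): a function that reads only its parameters and writes only to stdout can be tested with a plain output comparison, with no setup or teardown.

# Pure function: arguments in, stdout out, no side effects.
join_with() {
    local sep=$1; shift
    local out=$1; shift
    local arg
    for arg in "$@"; do
        out+="$sep$arg"
    done
    printf '%s\n' "$out"
}

# Test: capture the output in a subshell and compare.
test_join_with() {
    [ "$(join_with , a b c)" = "a,b,c" ] && echo pass || echo fail
}
test_join_with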
From an implementation point of view, I suggest shUnit2 or bats.
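For a taste of what such a test reads like, here is a minimal bats sketch for the add_to_file function from the question (the file layout, add_to_file.sh next to the test file, is an assumption):

#!/usr/bin/env bats

setup() {
    # Load the function under test; the path is an assumption.
    source "${BATS_TEST_DIRNAME}/add_to_file.sh"
    TMP=$(mktemp)
    printf 'foo\nbar\nbaz\n' > "$TMP"
}

teardown() {
    rm -f "$TMP"
}

@test "add_to_file appends and keeps the file sorted and unique" {
    echo qux | add_to_file "$TMP"
    [ "$(cat "$TMP")" = "$(printf 'bar\nbaz\nfoo\nqux\n')" ]
}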
From a practical point of view, I suggest not giving up. I use TDD on bash scripts and I confirm that it is worth the effort.
Of course, I get about twice as many lines of test code as of production code, but with complex scripts the effort spent on testing is a good investment. This is true in particular when your client changes their mind near the end of the project and modifies some requirements. Having a regression test suite is a big aid in changing complex bash code.
If you code a bash program large enough to require TDD, you are using the wrong language.
I suggest you read my previous post on best practices in bash programming; you will probably find something useful to make your bash program testable, but my statement above stands.
Design patterns or best practices for shell scripts
Writing what Meszaros calls consumer tests is hard in any language. Another approach is to verify the behavior of commands such as rsync manually, then write unit tests to prove specific functionality without hitting the network. In this slightly modified example, $run is used to print the side effects if the script is run with the keyword "test":
function distribute {
    local file=$1 ; shift
    for host in "$@" ; do
        $run rsync -ae ssh $file $host:$file
    done
}

if [[ $1 == "test" ]]; then
    run="echo"
else
    distribute schedule.txt $*
    exit 0
fi
#
# Built-in self-tests
#
output=$(mktemp)
expected=$(mktemp)
set -e
trap "rm $got $expected" EXIT
distribute schedule.txt login1 login2 > $output
cat << EOF > $expected
rsync -ae ssh schedule.txt login1:schedule.txt
rsync -ae ssh schedule.txt login2:schedule.txt
EOF
diff $output $expected
echo -n '.'
echo; echo "PASS"
You might want to take a look at cucumber/aruba. It did quite a nice job for me.
Additionally, you can stub just about everything you want by doing something like this:
#
# code.sh
#
some_function_calling_some_external_binary()
{
    if ! external_binary action_1; then
        : # ... (handle failure; the no-op : keeps the block valid)
    fi
    if ! external_binary action_2; then
        : # ...
    fi
}
#
# test.sh
#
# now for the test, simply stub your external binary:
external_binary()
{
    if [ "$1" = "action_1" ]; then
        : # stub action_1
    elif [ "$1" = "action_2" ]; then
        : # stub action_2
    else
        # anything else falls through to the real binary
        command external_binary "$@"
    fi
}
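To complete the picture, test.sh would also source code.sh and then invoke the function, so that the stub (being a shell function) shadows the real binary; a sketch:

# test.sh (continued)
. ./code.sh                                   # load the function under test
some_function_calling_some_external_binary    # now runs against the stubs
echo "stubbed run finished"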
The Advanced Bash-Scripting Guide has an example of an assert function, but here is a simpler and more flexible one: just use eval of $* to test any condition.
assert() {
    if ! eval $* ; then
        echo
        echo "===== Assertion failed: \"$*\" ====="
        echo "File \"$0\", line:$LINENO line:${BASH_LINENO[*]}"
        echo line:$(caller 0)
        exit 99
    fi
}
# e.g. USAGE:
assert [[ $r == 42 ]]
assert "((r==42))"
BASH_LINENO and the caller builtin are specific to the bash shell.
Take a look at the Outthentic framework: it is designed to create scenarios that run any Bash code and then analyze the stdout using a formal DSL; it is pretty easy to build a TDD/black-box test suite on top of this tool.