How can I run all functions in a bash script, in the order in which they are declared without explicitly calling the function? [duplicate] - bash

This question already has answers here:
Get a list of function names in a shell script [duplicate]
(4 answers)
Closed 6 years ago.
Let's say I have a script that has a bunch of functions that act like test cases. For example:
#!/bin/bash
...
function testAssertEqualsWithStrings () {
    assertEquals "something" "something"
}
testAssertEqualsWithStrings <--- I don't want to do this
function testAssertEqualsWithIntegers () {
    assertEquals 10 10
}
testAssertEqualsWithIntegers <--- I don't want to do this
function testAssertEqualsWithIntegers2 () {
    assertEquals 5 $((10 - 5))
}
testAssertEqualsWithIntegers2 <--- I don't want to do this
function testAssertEqualsWithDoubles () {
    assertEquals 5.5 5.5
}
testAssertEqualsWithDoubles <--- I don't want to do this
...
Is there a way I can call all of these functions in order without having to actually use that explicit function call underneath each test case? The idea is that the user shouldn't have to manage calling the functions; ideally, the test suite library would do it for them. I just need to know if this is even possible.
Edit: The reason why I don't use just the assert methods is so I can have meaningful output. My current setup allows me to have output such as this:
...
LineNo 14: Passed - testAssertEqualsWithStrings
LineNo 19: Passed - testAssertEqualsWithIntegers
LineNo 24: Passed - testAssertEqualsWithIntegers2
LineNo 29: Passed - testAssertEqualsWithDoubles
LineNo 34: Passed - testAssertEqualsWithDoubles2
...
LineNo 103: testAssertEqualsWithStringsFailure: assertEquals() failed. Expected "something", but got "something else".
LineNo 108: testAssertEqualsWithIntegersFailure: assertEquals() failed. Expected "5", but got "10".
LineNo 115: testAssertEqualsWithArraysFailure: assertEquals() failed. Expected "1,2,3", but got "4,5,6".
LineNo 120: testAssertNotSameWithIntegersFailure: assertNotSame() failed. Expected not "5", but got "5".
LineNo 125: testAssertNotSameWithStringsFailure: assertNotSame() failed. Expected not "same", but got "same".
...
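The assert library itself isn't shown here, but purely as a hypothetical sketch, an assertEquals that produces output in this shape could pull the call site from BASH_LINENO and FUNCNAME:
# Hypothetical sketch only -- not the actual library used above
assertEquals () {
    local expected=$1 actual=$2
    local line=${BASH_LINENO[0]}    # line number the assertion was called from
    local caller=${FUNCNAME[1]}     # name of the calling test function
    if [[ "$expected" == "$actual" ]]; then
        echo "LineNo $line: Passed - $caller"
    else
        echo "LineNo $line: $caller: assertEquals() failed. Expected \"$expected\", but got \"$actual\"."
    fi
}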
EDIT: Solution that worked for me
function runUnitTests () {
    testNames=$(grep "^function" "$0" | awk '{print $2}')
    testNamesArray=($testNames)
    beginUnitTests # prints some pretty output
    for testCase in "${testNamesArray[@]}"
    do
        eval "$testCase"
    done
    endUnitTests # prints some more pretty output
}
Then I call runUnitTests at the bottom of my test suite.
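A small refinement of the same idea, sketched here under the assumption that every test function name starts with "test" (so helpers such as beginUnitTests are never picked up by the grep):
#!/bin/bash
# Sketch: run only functions declared as "function test...", grepping the
# script itself so that declaration order is preserved.
function runUnitTests () {
    beginUnitTests # prints some pretty output
    for testCase in $(grep "^function test" "$0" | awk '{print $2}')
    do
        "$testCase"
    done
    endUnitTests # prints some more pretty output
}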

If you just want to run these functions without calling them, why declare them as functions in the first place?
Either you need them to be functions because you use them multiple times and need the calls to differ each time, in which case I don't see an issue with calling them explicitly.
Otherwise, you just want to run each function exactly once, which is simply running the commands: remove the function declarations and run each line directly.
For this example
#!/bin/bash
...
assertEquals "something" "something" #testAssertEqualsWithStrings
assertEquals 10 10 #testAssertEqualsWithIntegers
assertEquals 5 $((10 - 5)) #testAssertEqualsWithIntegers2
assertEquals 5.5 5.5 #testAssertEqualsWithDoubles
...

Related

How to find the line causing error in Julia?

Suppose there is a script A that calls function B, both in Julia.
There are some errors in function B, which cause the script to be stopped at runtime.
Is there a neat way to find out which line is causing the error?
It does not make sense to have to put messages like println on each line manually, just to find out up to which line the code survives and on which line the error happens.
Edit: I am using Red Hat Linux 4.1.2 and Julia version 0.3.6 directly, with no IDE.
Reading the backtrace:
juser#juliabox:~$ cat foo.jl
# line 1 empty comment
foo() = error("This is line 2")
foo() # line 3
juser#juliabox:~$ julia foo.jl
ERROR: This is line 2
in foo at /home/juser/foo.jl:2
in include at ./boot.jl:245
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:285
in _start at ./client.jl:354
while loading /home/juser/foo.jl, in expression starting on line 3
The lines in foo at /home/juser/foo.jl:2 ... while loading /home/juser/foo.jl, in expression starting on line 3 read as: "there was an error at line 2 of the file /home/juser/foo.jl ... while loading /home/juser/foo.jl, in an expression starting on line 3".
Looks pretty clear to me!
Edit: /home/juser/foo.jl:2 means: file /home/juser/foo.jl, line number 2.
You could also use the @show macro instead of the println function for debugging purposes:
julia> println(1 < 5 < 10)
true
julia> @show 1 < 5 < 10
(1<5<10) => true
true

How can I run a sub function in Bash

How can I run a 'sub function' from a script on the command line? Example:
#script_1.sh
main_function() {
    sub_function() {
        echo "hello world"
    }
}
I tried to source this file and call the function from another script:
#script_2.sh
source script_1.sh
sub_function
But I get
script_2.sh: line 3: sub_function: command not found
while I expected to just get hello world.
Defined this way, sub_function will only be defined after the outer function has been called.
So:
#script_1.sh
function() {
    sub_function() {
        #cmd
    }
}
#script_2.sh
source script_1.sh
function
sub_function
... should work ... except you should rename function, as it's a reserved word
The step missing in your question is to invoke function first - its action is to define sub_function.
Note that sub_function doesn't 'belong' to function in any way - its definition is just a side-effect of running function.
P.S. I assume you aren't really trying to call it function - that's a reserved word in bash.
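A concrete sketch of that fix, using the hypothetical name outer instead of the reserved word:
#script_1.sh (sketch)
outer() {
    sub_function() {
        echo "hello world"
    }
}
#script_2.sh (sketch)
source script_1.sh
outer        # running outer defines sub_function as a side effect
sub_function # now prints "hello world"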

dynamic usage of attribute in recipe

I am trying to increment a value and use it dynamically in another resource in a recipe, but I am still failing to do that.
Chef::Log.info("I am in #{cookbook_name}::#{recipe_name} and current disk count #{node[:oracle][:asm][:test]}")
bash "beforeTest" do
code lazy{ echo #{node[:oracle][:asm][:test]} }
end
ruby_block "test current disk count" do
block do
node.set[:oracle][:asm][:test] = "#{node[:oracle][:asm][:test]}".to_i+1
end
end
bash "test" do
code lazy{ echo #{node[:oracle][:asm][:test]} }
end
However, I'm still getting the error below:
NoMethodError ------------- undefined method `echo' for Chef::Resource::Bash
Cookbook Trace: ---------------
/var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb:3:in `block (2 levels) in from_file'
Resource Declaration: ---------------------
# In /var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb
1: bash "beforeTest" do
2: code lazy{
3: echo "#{node[:oracle][:asm][:test]}"
4: }
5: end
Can you please help with how lazy should be used in a bash resource? If not lazy, is there any other option?
bash "beforeTest" do
code lazy { "echo #{node[:oracle][:asm][:test]}" }
end
You should quote the command for the interpolation to work; otherwise, Ruby searches for an echo method, which is unknown in the Ruby context (thus the error you get in the log).
Warning: lazy has to be for the whole resource attribute; something like this WON'T work:
bash "beforeTest" do
code "echo node asm test is: #{lazy { node[:oracle][:asm][:test]} }"
end
The lazy evaluation takes a block of Ruby code, as described here.
You may have a better result with the log resource like this:
log "print before" do
message lazy { "node asm test is #{node[:oracle][:asm][:test]}" }
end
I'd been racking my brain over this problem until I came up with lambda expressions. But using a lambda alone didn't help me at all, so I thought of using both a lambda and lazy evaluation. Although a lambda is already lazily evaluated, when Chef compiles the recipe the resource where you call the lambda expression still gets evaluated. So to prevent it from being evaluated (somehow), I put the call inside a lazy block.
The lambda expression
app_version = lambda{`cat version`}
then the resource block
file 'tmp/validate.version' do
  user 'user'
  group 'user_group'
  content lazy { app_version.call }
  mode '0755'
end
Hope this can help others too :) or if you have some better solution please do let me know :)

tk_optionMenu error "can't read "a": no such variable"

I am trying to execute some simple code:
global a
eval tk_optionMenu .qt.oc a [list 1 2 4 8 16]
proc Run {} {
    puts "$a"
}
I have a button that is associated with the Run proc. When I press the Run button,
I receive the following error:
can't read "a": no such variable
can't read "a": no such variable
while executing
"puts "$a""
(procedure "Run" line 2)
invoked from within
"Run"
invoked from within
".top.run invoke"
("uplevel" body line 1)
invoked from within
"uplevel #0 [list $w invoke]"
(procedure "tk::ButtonUp" line 22)
invoked from within
"tk::ButtonUp .top.run"
(command bound to event)
Any suggestions?
global must be used inside the scope where you are trying to access a global variable. For example:
proc Run {} {
    global a
    puts "$a"
}
Here's an excerpt from the global man page:
This command has no effect unless executed in the context of a proc
body.
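Alternatively, if you prefer not to add a global statement, you can refer to the variable by its fully qualified name (a minimal sketch):
proc Run {} {
    puts "$::a"   ;# ::a always names the global variable a
}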

command line arguments in bash to Rscript

I have a bash script that creates a csv file and an R file that creates graphs from that.
At the end of the bash script I call Rscript Graphs.R 10
The response I get is as follows:
Error in is.vector(X) : subscript out of bounds
Calls: print ... <Anonymous> -> lapply -> FUN -> lapply -> is.vector
Execution halted
The first few lines of my Graphs.R are:
#!/bin/Rscript
args <- commandArgs(TRUE)
CorrAns = args[1]
I have no idea what I am doing wrong. The advice on the net appears to say that this should work. It's very hard to make sense of commandArgs.
With the following in args.R
print(commandArgs(TRUE)[1])
and the following in args.sh
Rscript args.R 10
I get the following output from bash args.sh
[1] "10"
and no error. If necessary, convert to a numeric type using as.numeric(commandArgs(TRUE)[1]).
Just a guess: perhaps you need to convert CorrAns from character to numeric, since the Value section of ?commandArgs says:
A character vector containing the name
of the executable and the
user-supplied command line arguments.
UPDATE: It could be as easy as:
#!/bin/Rscript
args <- commandArgs(TRUE)
(CorrAns = args[1])
(CorrAns = as.numeric(args[1]))
Reading the docs, it seems you might need to remove the TRUE from the call to commandArgs() as you don't call the script with --args. Either that, or you need to call Rscript Graphs.R --args 10.
Usage
commandArgs(trailingOnly = FALSE)
Arguments
trailingOnly logical. Should only
arguments after --args be returned?
Rscript args.R 10, where 10 is the numeric value we want to pass to the R script.
print(as.numeric(commandArgs(TRUE)[1])) prints out the value, which can then be assigned to a variable.
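A minimal sketch of that assignment inside Graphs.R:
args <- commandArgs(trailingOnly = TRUE)
CorrAns <- as.numeric(args[1])  # first trailing argument, converted to numeric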
