tk_optionMenu error "can't read "a": no such variable"

I am trying to execute this simple piece of code:
global a
eval tk_optionMenu .qt.oc a [list 1 2 4 8 16]
proc Run {} {
puts "$a"
}
I have a button associated with the Run proc. When I press the Run button,
I receive the following error:
can't read "a": no such variable
can't read "a": no such variable
while executing
"puts "$a""
(procedure "Run" line 2)
invoked from within
"Run"
invoked from within
".top.run invoke"
("uplevel" body line 1)
invoked from within
"uplevel #0 [list $w invoke]"
(procedure "tk::ButtonUp" line 22)
invoked from within
"tk::ButtonUp .top.run"
(command bound to event)
Any suggestions?

global must be used inside the scope where you are trying to access a global variable. For example:
proc Run {} {
global a
puts "$a"
}
Here's an excerpt from the global man page:
This command has no effect unless executed in the context of a proc
body.


Tcl passing value of a variable

I have a procedure which reads a register from a specified address:
"rd_addr $jtag_master 0x00"
I'd like to remove the "$jtag_master" input, and instead use a global variable declared at the beginning of the script, which I can then use in other procedures. The initial declaration is currently implemented through use of another procedure, "set_dev".
global device_path
# Establish the device_path as specified by "device"
proc set_dev {device} {
set jtag_master [lindex [get_service_paths master] $device];
set device_path $jtag_master
}
proc rd_addr {address {size 1}} {
set read_data [master_read_32 $device_path $address $size];
puts "$read_data"
}
# Init
set_dev 0
puts "jtag: $jtag_master"; # Returns proper path
puts "device_path: $device_path"; # Returns proper path
rd_addr 0x00
My current implementation of rd_addr returns:
error: can't read "device_path": no such variable
I've also tried implementing defaults, which result in their own errors:
proc rd_addr {address {size 1} {device_path $device_path}} {
-> error: master_read_32: Error: The given path is not valid - $device_path
proc rd_addr { {device_path $device_path} address {size 1} } {
-> error: wrong # args: should be "rd_addr ?device_path? address ?size?"
Thanks in advance!
From the global documentation:
This command has no effect unless executed in the context of a proc body. If the global command is executed in the context of a proc body, it creates local variables linked to the corresponding global variables
Basically, global variable_name needs to be used inside each proc that wants to refer to the global variable of that name.
proc set_dev {device} {
global device_path
set jtag_master [lindex [get_service_paths master] $device];
set device_path $jtag_master
}
proc rd_addr {address {size 1}} {
global device_path
set read_data [master_read_32 $device_path $address $size];
puts "$read_data"
}
Without it, set_dev sets a variable local to the proc with that name, and, as you've seen, rd_addr can't find any variable with that name. Global variables in Tcl are not visible at proc scope without this or one of a few other approaches (a fully qualified $::device_path, for example).

How can I run all functions in a bash script, in the order in which they are declared without explicitly calling the function? [duplicate]

This question already has answers here:
Get a list of function names in a shell script [duplicate]
(4 answers)
Closed 6 years ago.
Let's say I have a script that has a bunch of functions that act like test cases. For example:
#!/bin/bash
...
function testAssertEqualsWithStrings () {
assertEquals "something" "something"
}
testAssertEqualsWithStrings <--- I don't want to do this
function testAssertEqualsWithIntegers () {
assertEquals 10 10
}
testAssertEqualsWithIntegers <--- I don't want to do this
function testAssertEqualsWithIntegers2 () {
assertEquals 5 $((10 - 5))
}
testAssertEqualsWithIntegers2 <--- I don't want to do this
function testAssertEqualsWithDoubles () {
assertEquals 5.5 5.5
}
testAssertEqualsWithDoubles <--- I don't want to do this
...
Is there a way I can call all of these functions in order without having to write an explicit call after each test case? The idea is that the user shouldn't have to manage calling the functions; ideally, the test suite library would do it for them. I just need to know if this is even possible.
Edit: The reason I don't just use the assert methods directly is so I can have meaningful output. My current setup allows me to have output such as this:
...
LineNo 14: Passed - testAssertEqualsWithStrings
LineNo 19: Passed - testAssertEqualsWithIntegers
LineNo 24: Passed - testAssertEqualsWithIntegers2
LineNo 29: Passed - testAssertEqualsWithDoubles
LineNo 34: Passed - testAssertEqualsWithDoubles2
...
LineNo 103: testAssertEqualsWithStringsFailure: assertEquals() failed. Expected "something", but got "something else".
LineNo 108: testAssertEqualsWithIntegersFailure: assertEquals() failed. Expected "5", but got "10".
LineNo 115: testAssertEqualsWithArraysFailure: assertEquals() failed. Expected "1,2,3", but got "4,5,6".
LineNo 120: testAssertNotSameWithIntegersFailure: assertNotSame() failed. Expected not "5", but got "5".
LineNo 125: testAssertNotSameWithStringsFailure: assertNotSame() failed. Expected not "same", but got "same".
...
EDIT: Solution that worked for me
function runUnitTests () {
    testNames=$(grep "^function" "$0" | awk '{print $2}')
    testNamesArray=($testNames)
    beginUnitTests # prints some pretty output
    for testCase in "${testNamesArray[@]}"
    do
        "$testCase" # direct call; eval is unnecessary here
    done
    endUnitTests # prints some more pretty output
}
Then I call runUnitTests at the bottom of my test suite.
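For reference, the reason for grepping $0 rather than using bash's own `declare -F` is ordering: `declare -F` lists function names alphabetically, while grepping the source preserves declaration order. A minimal self-contained sketch of the same idea (the function names here are hypothetical placeholders):

```shell
#!/bin/bash
# Sketch of the grep-based runner. Grepping the source preserves
# declaration order; `declare -F` would sort names alphabetically.
function testBravo () { echo "ran testBravo"; }
function testAlpha () { echo "ran testAlpha"; }

runAll () {
    local names
    names=$(grep "^function" "${BASH_SOURCE[0]}" | awk '{print $2}')
    for t in $names; do
        "$t" # direct invocation; no eval needed
    done
}
runAll
```

Note that testBravo runs before testAlpha, i.e. in declaration order, not alphabetical order.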
If you just want to run these functions without calling them, why declare them as functions in the first place?
You either need them to be functions because you use them multiple times with different calls each time, in which case I don't see an issue.
Otherwise, you just want to run each function's body once, which is just running the commands. Remove the function declarations and run each line directly.
For this example
#!/bin/bash
...
assertEquals "something" "something" #testAssertEqualsWithStrings
assertEquals 10 10 #testAssertEqualsWithIntegers
assertEquals 5 $((10 - 5)) #testAssertEqualsWithIntegers2
assertEquals 5.5 5.5 #testAssertEqualsWithDoubles
...
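A middle ground between the two approaches, keeping named test functions but avoiding both grep and eval, is to register each function in an array as it is declared and then run the array in order. This is a sketch of my own suggestion, not the asker's code; the test bodies are placeholders:

```shell
#!/bin/bash
# Each test registers itself in an array; the runner then calls them
# in registration order. No grep, no eval.
tests=()

testStrings () { [ "something" = "something" ] && echo "Passed testStrings"; }
tests+=(testStrings)

testIntegers () { [ 10 -eq 10 ] && echo "Passed testIntegers"; }
tests+=(testIntegers)

runRegistered () {
    local t
    for t in "${tests[@]}"; do
        "$t"
    done
}
runRegistered
```

The one-line `tests+=(name)` after each declaration replaces the explicit call the asker wanted to avoid, and the order is exactly the registration order.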

How to read standard input from a Bash heredoc within a Groovy script

I'm trying to make a Groovy script read standard input, so I can call it from a Bash script with a heredoc, but I get a java.lang.NullPointerException: Cannot invoke method readLine() on null object exception.
Here's a cut-down Groovy script echo.groovy:
#!/usr/bin/env groovy
for (;;)
{
String line = System.console().readLine()
if (line == null)
break
println(">>> $line")
}
Here's the equivalent Ruby script echo.rb:
#!/usr/bin/env ruby
ARGF.each do |line|
puts ">>> #{line}"
end
If I call these from a Bash shell, everything works as expected:
$ ./echo.rb
one
>>> one
two
>>> two
three
>>> three
^C
$ ./echo.groovy
one
>>> one
two
>>> two
three
>>> three
^C
This is the Bash script heredoc.sh using heredocs:
echo 'Calling echo.rb'
./echo.rb <<EOF
one
two
three
EOF
echo 'Calling echo.groovy'
./echo.groovy <<EOF
one
two
three
EOF
This is what happens when I run it:
$ ./heredoc.sh
Calling echo.rb
>>> one
>>> two
>>> three
Calling echo.groovy
Caught: java.lang.NullPointerException: Cannot invoke method readLine() on null object
java.lang.NullPointerException: Cannot invoke method readLine() on null object
at echo.run(echo.groovy:4)
Any ideas?
UPDATE
On Etan's advice, I changed echo.groovy to the following:
#!/usr/bin/env groovy
Reader reader = new BufferedReader(new InputStreamReader(System.in))
for (;;)
{
String line = reader.readLine()
if (line == null)
break
println(">>> $line")
}
It now works with heredocs:
$ ./heredoc.sh
Calling echo.rb
>>> one
>>> two
>>> three
Calling echo.groovy
>>> one
>>> two
>>> three
Thanks Etan. If you'd like to post a formal answer, I'll upvote it.
As an alternative to Etan's answer, a Groovier approach is the withReader method, which handles the cleanup of the reader afterwards, and the BufferedReader's eachLine method, which handles the infinite looping.
#!/usr/bin/env groovy
System.in.withReader { console ->
console.eachLine { line ->
println ">>> $line"
}
}
As Etan says, you need to read from System.in. I think this will get the response you are after:
#!/usr/bin/env groovy
System.in.withReader { r ->
r.eachLine { line ->
println ">>> $line"
}
}
Though it's not exactly the same as the Ruby version, since ARGF will read from any files named in the arguments, if any were passed, before falling back to standard input.
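The underlying cause of the original NullPointerException is that System.console() returns null whenever standard input is not attached to a terminal, which is exactly the situation inside a heredoc. The same distinction can be observed in pure Bash with the -t test:

```shell
#!/bin/bash
# [ -t 0 ] is true only when stdin is a terminal. Inside a heredoc
# (or a pipe) it is not, which mirrors why System.console() returns
# null when the Groovy script is fed by a heredoc.
check_stdin () {
    if [ -t 0 ]; then
        echo "stdin is a TTY"
    else
        echo "stdin is not a TTY"
    fi
}
check_stdin <<EOF
one
EOF
```

Run interactively, check_stdin without the heredoc would report a TTY; with the heredoc it always reports a non-TTY, just as the Groovy script always sees a null console.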

dynamic usage of attribute in recipe

I am trying to increment a node attribute's value and use it dynamically in another resource in my recipe, but I am still failing to do that.
Chef::Log.info("I am in #{cookbook_name}::#{recipe_name} and current disk count #{node[:oracle][:asm][:test]}")
bash "beforeTest" do
code lazy{ echo #{node[:oracle][:asm][:test]} }
end
ruby_block "test current disk count" do
block do
node.set[:oracle][:asm][:test] = "#{node[:oracle][:asm][:test]}".to_i+1
end
end
bash "test" do
code lazy{ echo #{node[:oracle][:asm][:test]} }
end
However, I'm still getting the error below:
NoMethodError
-------------
undefined method `echo' for Chef::Resource::Bash

Cookbook Trace:
---------------
/var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb:3:in `block (2 levels) in from_file'

Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/Oracle11G/recipes/testSplit.rb
1: bash "beforeTest" do
2: code lazy{
3: echo "#{node[:oracle][:asm][:test]}"
4: }
5: end
Can you please help with how lazy should be used in the bash resource? If lazy won't work, is there another option?
bash "beforeTest" do
code lazy { "echo #{node[:oracle][:asm][:test]}" }
end
You should quote the command for the interpolation to work; if you don't, Ruby looks for an echo method, which does not exist in the Ruby context (hence the NoMethodError in the log).
Warning: lazy has to be for the whole resource attribute; something like this WON'T work:
bash "beforeTest" do
code "echo node asm test is: #{lazy { node[:oracle][:asm][:test]} }"
end
The lazy evaluation takes a block of Ruby code, as described in the Chef documentation.
You may have a better result with the log resource like this:
log "print before" do
message lazy { "node asm test is #{node[:oracle][:asm][:test]}" }
end
I'd been drilling my head over this problem until I came up with lambda expressions, but just using a lambda didn't help at all, so I used both a lambda and lazy evaluation. Although a lambda is already lazily evaluated, the resource attribute that calls the lambda expression is still evaluated when the Chef recipe is compiled, so to prevent it from being evaluated (somehow) I put the call inside a lazy block.
The lambda expression
app_version = lambda{`cat version`}
then the resource block
file 'tmp/validate.version' do
user 'user'
group 'user_group'
content lazy { app_version.call }
mode '0755'
end
Hope this can help others too :) If you have a better solution, please do let me know :)

command line arguments in bash to Rscript

I have a bash script that creates a csv file and an R file that creates graphs from that.
At the end of the bash script I call Rscript Graphs.R 10
The response I get is as follows:
Error in is.vector(X) : subscript out of bounds
Calls: print ... <Anonymous> -> lapply -> FUN -> lapply -> is.vector
Execution halted
The first few lines of my Graphs.R are:
#!/bin/Rscript
args <- commandArgs(TRUE)
CorrAns = args[1]
No idea what I am doing wrong; the advice on the net appears to say that this should work. It's very hard to make sense of commandArgs.
With the following in args.R
print(commandArgs(TRUE)[1])
and the following in args.sh
Rscript args.R 10
I get the following output from bash args.sh
[1] "10"
and no error. If necessary, convert to a numeric type using as.numeric(commandArgs(TRUE)[1]).
Just a guess: perhaps you need to convert CorrAns from character to numeric, since the Value section of ?commandArgs says:
A character vector containing the name
of the executable and the
user-supplied command line arguments.
UPDATE: It could be as easy as:
#!/bin/Rscript
args <- commandArgs(TRUE)
(CorrAns = args[1])
(CorrAns = as.numeric(args[1]))
Reading the docs, it seems you might need to remove the TRUE from the call to commandArgs() as you don't call the script with --args. Either that, or you need to call Rscript Graphs.R --args 10.
Usage
commandArgs(trailingOnly = FALSE)
Arguments
trailingOnly logical. Should only
arguments after --args be returned?
Rscript args.R 10 where 10 is the numeric value we want to pass to the R script.
print(as.numeric(commandArgs(TRUE)[1])) prints out the value, which can then be assigned to a variable.
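For completeness, the Bash side of the handoff is plain positional-argument passing; nothing R-specific happens before Rscript starts. A stub sketch, where graphs_stub is a hypothetical stand-in for `Rscript Graphs.R` (which would read the same value via commandArgs(TRUE)[1]):

```shell
#!/bin/bash
# graphs_stub stands in for `Rscript Graphs.R`: the caller passes 10 as
# a positional argument, and the callee reads it as $1, just as the R
# script reads commandArgs(TRUE)[1].
graphs_stub () {
    CorrAns=$1
    echo "CorrAns=$CorrAns"
}
graphs_stub 10
```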
