I'm having difficulty using environment variables in Tcl.
So I have a shell script which exports a bunch of environment variables so that my Tcl scripts can use them later on.
set_vars.sh has the following code in it:
export TEST_ENV_VAR=Test
and the following works:
% puts $env(TEST_ENV_VAR)
Test
But once I place the environment variable inside a larger expression or a quoted string, nothing is returned.
% puts "$env(TEST_ENV_VAR) is in TEST_ENV_VAR"
is in TEST_ENV_VAR
Anyone know what the issue could be? Any help would be greatly appreciated.
Thanks for your time.
There must be something you're not showing us:
$ export TEST_ENV_VAR=Test
$ tclsh
% puts $env(TEST_ENV_VAR)
Test
% puts "$env(TEST_ENV_VAR) is in TEST_ENV_VAR"
Test is in TEST_ENV_VAR
Note that env is a global variable, so if you're using it in a procedure, you must declare it as global:
$ tclsh
% proc show_env1 {varname} {
puts "does not work: $env($varname)"
}
% show_env1 TEST_ENV_VAR
can't read "env(TEST_ENV_VAR)": no such variable
% proc show_env2 {varname} {
global env
puts "works: $env($varname)"
}
% show_env2 TEST_ENV_VAR
works: Test
% proc show_env3 {varname} {
puts "also works: $::env($varname)"
}
% show_env3 TEST_ENV_VAR
also works: Test
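One cause that reproduces this exact symptom (an assumption, since we can't see set_vars.sh): the exported value ends in a carriage return, e.g. because set_vars.sh was saved with Windows line endings. Printed alone the \r is invisible, but inside a longer string it sends the terminal cursor back to column 0, so the tail overwrites "Test". A shell sketch:

```shell
#!/bin/sh
# Simulate a value with a trailing carriage return, as a CRLF-saved
# set_vars.sh would produce. Command substitution strips trailing
# newlines but keeps the \r.
export TEST_ENV_VAR="$(printf 'Test\r')"

# On a terminal this renders as " is in TEST_ENV_VAR": the \r moves the
# cursor to column 0 and the rest of the string overwrites "Test".
printf '%s is in TEST_ENV_VAR\n' "$TEST_ENV_VAR"

# od reveals the stray \r at the end of the value.
printf '%s' "$TEST_ENV_VAR" | od -c | head -n 1
```

If that's the culprit, running set_vars.sh through dos2unix (or stripping the \r in Tcl with `string trim`) fixes both cases.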
Related
I have a Nextflow script that creates a variable from a text file, and I need to pass the value of that variable to a command-line call (a bioconda package). Both steps happen inside the "script" part. I have tried to reference the variable using the '$' symbol without any result; I think that's because in the script part of a Nextflow script that symbol is for calling variables defined in the input part.
To make myself clearer, here is a code sample of what I'm trying to achieve:
params.gz_file = '/path/to/file.gz'
params.fa_file = '/path/to/file.fa'
params.output_dir = '/path/to/outdir'
input_file = file(params.gz_file)
fasta_file = file(params.fa_file)
process foo {
//publishDir "${params.output_dir}", mode: 'copy',
input:
path file from input_file
path fasta from fasta_file
output:
file ("*.html")
script:
"""
echo 123 > number.txt
parameter=`cat number.txt`
create_report $file $fasta --flanking $parameter
"""
}
By doing this, the error I receive is:
Error executing process > 'foo'
Caused by:
Unknown variable 'parameter' -- Make sure it is not misspelt and defined somewhere in the script before using it
Is there any way to call the variable parameter inside the script without Nextflow interpreting it as an input file? Thanks in advance!
The documentation re the script block is useful here:
Since Nextflow uses the same Bash syntax for variable substitutions in
strings, you need to manage them carefully depending on if you want to
evaluate a variable in the Nextflow context - or - in the Bash
environment execution.
One solution is to escape your shell (Bash) variables by prefixing them with a back-slash (\) character, like in the following example:
process foo {
script:
"""
echo 123 > number.txt
parameter="\$(cat number.txt)"
echo "\${parameter}"
"""
}
Another solution is to instead use a shell block, where dollar ($) variables are managed by your shell (Bash interpreter), while exclamation mark (!) variables are handled by Nextflow. For example:
process bar {
echo true
input:
val greeting from 'Hello', 'Hola', 'Bonjour'
shell:
'''
echo 123 > number.txt
parameter="$(cat number.txt)"
echo "!{greeting} parameter ${parameter}"
'''
}
Declare "parameter" in the top 'params' section:
params.parameter="1234"
(..)
script:
"""
(...)
create_report $file $fasta --flanking ${params.parameter}
(...)
"""
(...)
and call "nextflow run" with "--parameter 87678"
Is it possible to define a function but use a variable to compose its name?
_internal_add_deployment_aliases(){
environment=$1
instanceNumber=$2
shortName=$3
instance="${environment}${instanceNumber}"
ipVar="_ip_$instance"
ip=${!ipVar}
# lots of useful aliases
_full_build_${instance}(){ # how do i dynamically define a function using a variable in its name
#something useful
}
}
Context: I'd like to add a bunch of aliases and convenience functions for working with my cloud instances. Defining aliases is not a problem; I can easily do
alias _ssh_into_${instance}="ssh -i \"${KEY}\" root@$ip"
and I want to have specific aliases defined when I source from this...
Now, when I want to do the same for functions, I have a problem. Is it possible to do this?
Any help is very very much appreciated :)
You need to use eval for such a problem:
$ var=tmp
$ eval "function echo_${var} {
echo 'tmp'
}"
$ echo_tmp
tmp
An example of your script:
#! /bin/bash
_internal_add_deployment_aliases(){
environment=$1
instanceNumber=$2
shortName=$3
instance="${environment}${instanceNumber}"
ipVar="_ip_$instance"
ip=${!ipVar}
# lots of useful aliases
eval "_full_build_${instance}(){
echo _full_build_${instance}
}"
}
_internal_add_deployment_aliases "stackoverflow" 3 so
_full_build_stackoverflow3 # Will echo _full_build_stackoverflow3
exit 0
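A minimal sketch of the same pattern in isolation (the function and variable names here are invented): the function name is assembled from a variable before eval runs, and declare -F confirms the definition exists afterwards.

```shell
#!/bin/bash
# Build a function whose name comes from a variable. ${instance} is
# interpolated into the string *before* eval parses it as a definition.
make_build_fn() {
  local instance=$1
  eval "_full_build_${instance}() { echo \"building ${instance}\"; }"
}

make_build_fn prod1
declare -F _full_build_prod1 >/dev/null && _full_build_prod1   # prints: building prod1
```

Because eval executes whatever ends up in the variable, validate it (e.g. `[[ $instance =~ ^[A-Za-z0-9_]+$ ]]`) if it can come from untrusted input.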
I want to call a Ruby function from the command line with an array as an argument.
Script name is test.rb
In the code below the environments are like test, dev, uat; I am passing them as ['test','dev','uat'].
I have tried as below:
ruby -r "./test.rb" -e "start_services '['dev','test','uat']','developer','welcome123'"
def start_services(environments,node_user_name,node_password)
environments.each do |env|
puts env
end
puts node_user_name
puts node_password
end
Output:
-e:1: syntax error, unexpected tIDENTIFIER, expecting end-of-input start_services '['dev','test','uat']','developer',' ^
You clearly want to pass an array as the first parameter into the start_services method, so you should do it like this:
$ ruby -r "./test.rb" -e "start_services ['dev','test','uat'],'developer','welcome123'"
# output:
dev
test
uat
developer
welcome123
What you tried so far was an attempt to pass the string '[\'dev\',\'test\',\'uat\']', which was malformed because you didn't escape the ' characters.
Don't pass your credentials as arguments, any user on your system would be able to see them.
Instead, you could save them as environment variables or in a config file.
if ARGV.size == 0
puts "Here's how to launch this script : ruby #{__FILE__} env_name1 env_name2 ..."
exit
end
# Define those environment variables before launching the script.
# Alternative : write credentials in a json or yml file.
node_username = ENV["NODE_USERNAME"]
node_password = ENV["NODE_PASSWORD"]
ARGV.each do |env|
puts "Launching environment #{env}"
end
I have a custom 'runner'-script that I need to use to run all of my terminal commands. Below you can see the general idea in the script.
#!/usr/bin/env bash
echo "Running '$@'"
# do stuff before running cmd
"$@"
echo "Done"
# do stuff after running cmd
I can use the script in bash as follows:
$ ./run.sh echo test
Running 'echo test'
test
Done
$
I would like to use it like this:
$ echo test
Running 'echo test'
test
Done
$
Bash has the trap ... DEBUG and PROMPT_COMMAND, which lets me execute something before and after a command, but is there something that would allow me to execute instead of the command?
There is also the command_not_found_handle which would work if I had an empty PATH env variable, but that seems too dirty.
After some digging, I ended up looking at the source code and found that bash does not support custom executors. Below is a patch that adds a new handle working similarly to command_not_found_handle.
diff --git a/eval.c b/eval.c
index f02d6e40..8d32fafa 100644
--- a/eval.c
+++ b/eval.c
@@ -52,6 +52,10 @@
extern sigset_t top_level_mask;
#endif
+#ifndef EXEC_HOOK
+# define EXEC_HOOK "command_exec_handle"
+#endif
+
static void send_pwd_to_eterm __P((void));
static sighandler alrm_catcher __P((int));
@@ -172,7 +176,15 @@ reader_loop ()
executing = 1;
stdin_redir = 0;
- execute_command (current_command);
+ SHELL_VAR *hookf = find_function (EXEC_HOOK);
+ if (hookf == 0) {
+ execute_command (current_command);
+ } else {
+ char *command_to_print = make_command_string (current_command);
+ WORD_LIST *og = make_word_list(make_word(command_to_print), (WORD_LIST *)NULL);
+ WORD_LIST *wl = make_word_list(make_word(EXEC_HOOK), og);
+ execute_shell_function (hookf, wl);
+ }
exec_done:
QUIT;
One can then define function command_exec_handle() { eval $1; } which will be executed instead of the original command given in the prompt. The original command is fully in the first parameter. The command_exec_handle can be given in .bashrc and it works as expected.
Notice: this is very dangerous! If you mess up and put a bad command_exec_handle in your .bashrc, you might end up with a shell that does not execute commands. It will be quite hard to fix without booting from a live CD.
It seems you have the same problem listed here. If you want to run some commands when your original command was not found, Bash 4's command_not_found_handle will certainly fit your needs.
Try to be more specific, maybe with some code snippets that do or do not work, in order to help us to help you...
A recent vulnerability, CVE-2014-6271, in how Bash interprets environment variables was disclosed. The exploit relies on Bash parsing some environment variable declarations as function definitions, but then continuing to execute code following the definition:
$ x='() { echo i do nothing; }; echo vulnerable' bash -c ':'
vulnerable
But I don't get it. There's nothing I've been able to find in the Bash manual about interpreting environment variables as functions at all (except for inheriting functions, which is different). Indeed, a proper named function definition is just treated as a value:
$ x='y() { :; }' bash -c 'echo $x'
y() { :; }
But a corrupt one prints nothing:
$ x='() { :; }' bash -c 'echo $x'
$ # Nothing but newline
The corrupt function is unnamed, and so I can't just call it. Is this vulnerability a pure implementation bug, or is there an intended feature here, that I just can't see?
Update
Per Barmar's comment, I hypothesized the name of the function was the parameter name:
$ n='() { echo wat; }' bash -c 'n'
wat
Which I could swear I tried before, but I guess I didn't try hard enough. It's repeatable now. Here's a little more testing:
$ env n='() { echo wat; }; echo vuln' bash -c 'n'
vuln
wat
$ env n='() { echo wat; }; echo $1' bash -c 'n 2' 3 -- 4
wat
…so apparently the args are not set at the time the exploit executes.
Anyway, the basic answer to my question is, yes, this is how Bash implements inherited functions.
This seems like an implementation bug.
Apparently, the way exported functions work in bash is that they use specially-formatted environment variables. If you export a function:
f() { ... }
it defines an environment variable like:
f='() { ... }'
What's probably happening is that when the new shell sees an environment variable whose value begins with (), it prepends the variable name and executes the resulting string. The bug is that this includes executing anything after the function definition as well.
The fix described is apparently to parse the result to see if it's a valid function definition. If not, it prints the warning about the invalid function definition attempt.
This article confirms my explanation of the cause of the bug. It also goes into a little more detail about how the fix resolves it: not only do they parse the values more carefully, but variables that are used to pass exported functions follow a special naming convention. This naming convention is different from that used for the environment variables created for CGI scripts, so an HTTP client should never be able to get its foot into this door.
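As a sketch, the widely circulated probe shows the fix in action. On a fixed bash the trailing `echo vulnerable` never executes and only the probe's own command prints (early patched releases additionally emit an import warning on stderr; current releases silently treat x as a plain variable):

```shell
# Shellshock probe, run here on the assumption of a patched bash.
# Stderr is silenced because older patched versions print a warning there.
env x='() { :;}; echo vulnerable' bash -c 'echo this bash is patched' 2>/dev/null
```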
The following:
x='() { echo I do nothing; }; echo vulnerable' bash -c 'typeset -f'
prints
vulnerable
x ()
{
echo I do nothing
}
declare -fx x
It seems that Bash, after having parsed the x=..., recognized it as a function definition, exported it (note the declare -fx x), and then allowed execution of the command that follows the definition:
echo vulnerable
x='() { x; }; echo vulnerable' bash -c 'typeset -f'
prints:
vulnerable
x ()
{
x
}
and running x:
x='() { x; }; echo Vulnerable' bash -c 'x'
prints
Vulnerable
Segmentation fault: 11
It segfaults due to infinite recursive calls.
It doesn't override an already defined function:
$ x() { echo Something; }
$ declare -fx x
$ x='() { x; }; echo Vulnerable' bash -c 'typeset -f'
prints:
x ()
{
echo Something
}
declare -fx x
i.e., x remains the previously (correctly) defined function.
For the Bash 4.3.25(1)-release the vulnerability is closed, so
x='() { echo I do nothing; }; echo Vulnerable' bash -c ':'
prints
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
but, strangely (at least to me),
x='() { x; };' bash -c 'typeset -f'
STILL PRINTS
x ()
{
x
}
declare -fx x
and the
x='() { x; };' bash -c 'x'
segfaults too, so it STILL accepts the strange function definition...
I think it's worth looking at the Bash code itself. The patch gives a bit of insight as to the problem. In particular,
*** ../bash-4.3-patched/variables.c 2014-05-15 08:26:50.000000000 -0400
--- variables.c 2014-09-14 14:23:35.000000000 -0400
***************
*** 359,369 ****
strcpy (temp_string + char_index + 1, string);
! if (posixly_correct == 0 || legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
!
! /* Ancient backwards compatibility. Old versions of bash exported
! functions like name()=() {...} */
! if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
! name[char_index - 2] = '\0';
if (temp_var = find_function (name))
--- 364,372 ----
strcpy (temp_string + char_index + 1, string);
! /* Don't import function names that are invalid identifiers from the
! environment, though we still allow them to be defined as shell
! variables. */
! if (legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
if (temp_var = find_function (name))
When Bash exports a function, it shows up as an environment variable, for example:
$ foo() { echo 'hello world'; }
$ export -f foo
$ cat /proc/self/environ | tr '\0' '\n' | grep -A1 foo
foo=() { echo 'hello world'
}
When a new Bash process finds a function defined this way in its environment, it evaluates the code in the variable using parse_and_execute(). For normal, non-malicious code, executing it simply defines the function in Bash and moves on. However, because it's passed to a generic execution function, Bash will correctly parse and execute additional code defined in that variable after the function definition.
You can see that in the new code, a flag called SEVAL_ONECMD has been added that tells Bash to only evaluate the first command (that is, the function definition) and SEVAL_FUNCDEF to only allow function definitions.
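A follow-up you can check on a current bash: exported functions now travel under a reserved BASH_FUNC_name%% variable rather than a plain name, which is why an ordinary environment variable can no longer be mistaken for a function definition:

```shell
#!/bin/bash
# Export a function and show how it appears in the environment of child
# processes on a post-Shellshock bash: under a reserved name, not `foo`.
foo() { echo 'hello world'; }
export -f foo
env | grep '^BASH_FUNC_foo'
```

On current releases this prints a variable named `BASH_FUNC_foo%%` holding the function body.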
In regard to your question about documentation: looking at the command-line documentation for the env command, a study of the syntax shows that env is working as documented.
There are, optionally, four possible parts:
An optional hyphen as a synonym for -i (for backward compatibility I assume)
Zero or more NAME=VALUE pairs. These are the variable assignment(s) which could include function definitions.
Note that no semicolon (;) is required between or following the assignments.
The last argument(s) can be a single command followed by its argument(s). It will run with whatever permissions have been granted to the login being used. Security is controlled by restricting permissions on the login user and setting permissions on user-accessible executables such that users other than the executable's owner can only read and execute the program, not alter it.
[ spot@LX03:~ ] env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
-i, --ignore-environment start with an empty environment
-u, --unset=NAME remove variable from the environment
--help display this help and exit
--version output version information and exit
A mere - implies -i. If no COMMAND, print the resulting environment.
Report env bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
Report env translation bugs to <http://translationproject.org/team/>
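A tiny demonstration of the documented behavior: the NAME=VALUE assignments given to env apply only to the COMMAND it runs, not to the calling shell (the GREETING name is just for illustration).

```shell
# The assignment is visible inside the child process env starts...
env GREETING=hi sh -c 'echo "inside: $GREETING"'   # prints: inside: hi
# ...but the calling shell's environment is untouched.
echo "outside: ${GREETING:-unset}"                 # prints: outside: unset
```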