Run external process from groovy - bash

I have a bash script which I want to execute from Groovy, like:
some_shell_script.sh param1 "report_date=`some_function 0 \"%Y%m%d\"`"
That script runs successfully from the command line, but when I try to execute it from Groovy:
def command = "some_shell_script.sh param1 "report_date=`some_function 0 \"%Y%m%d_%H%M%S\"`""
def sout = new StringBuffer()
def serr = new StringBuffer()
// tried different shells here: /bin/sh, /bin/bash, bash
ProcessBuilder pb = new ProcessBuilder(['sh', '-c',command])
Process proc = pb.start()
proc.consumeProcessOutput(sout, serr)
def status = proc.waitFor()
println 'sout: ' + sout
println 'serr: ' + serr
I get the following error:
serr: sh: some_function: command not found
At the same time,
which some_function
returns the function definition, like:
some_function ()
{
    # some definition here
}
It looks like when I run an external script from Groovy, it starts a different process without the context of the parent process, i.e. none of the parent process's function definitions exist.
Does anyone have a clue how to cope with such a situation?

You should replace the double quotes in your command definition with single quotes.
def command = 'some_shell_script.sh param1 "report_date=`some_function 0 \\"%Y%m%d_%H%M%S\\"`"'
Add:
println command
to ensure that you are executing the correct command.
Also open a new bash shell and ensure that some_function is defined.

Definitely check out those quotes as indicated by @Reimeus. I had some doubts about those.
In addition, some_function() may be defined in ~/.bashrc, /etc/bash.bashrc or in a file sourced by either of those when you run bash interactively. This does not happen if you run a script.
(Which is good for making scripts run predictably - you can't have your script depend on people's login environment.)
If this is the case, move some_function() to another file, and put its full path in the BASH_ENV variable, so that bash picks it up when processing scripts.
man bash:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
[Manual page bash(1) line 158]
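For example, a minimal sketch (the path and the function body here are placeholders, not your real definition):
# /home/user/functions.sh -- hypothetical file now holding the definition
some_function () {
    # stand-in body for illustration; keep your real definition here
    echo "stub"
}
# non-interactive bash sources $BASH_ENV before running anything else,
# so the function is defined by the time the command substitution runs
export BASH_ENV=/home/user/functions.sh
bash -c 'some_shell_script.sh param1 "report_date=`some_function 0 \"%Y%m%d\"`"'
From Groovy, that means launching 'bash' rather than 'sh' in the ProcessBuilder argument list and adding BASH_ENV to pb.environment(); a non-interactive sh does not read BASH_ENV.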

This seems like a path problem. Can you put the full path to the script and try again?

DISCLAIMER: there are limitations to this solution, and the shell sub-script commands should be properly tested before deployment. However, if multithreading is not required, e.g. the function returns a short result immediately, there is an alternative, which I implemented as shown here.
For instance, if the result of mycmd depends on an environment variable set in ~/.bashrc, I could display its result (tried as a Groovy script, v1.8.1; and yes, this is a contrived example and it might be risky!):
commands = '''source ~/.bashrc; cd ~/mytest; ./mycmd'''
"bash".execute().with{
out << commands
out << ';exit $?\n'
waitFor()
[ok:!exitValue(), out:in.text, err:err.text]
}.with{ println ok?out:err }
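For reference, the Groovy snippet above is roughly equivalent to this pipeline, feeding the commands to bash on standard input:
echo 'source ~/.bashrc; cd ~/mytest; ./mycmd; exit $?' | bash
Sourcing ~/.bashrc explicitly is what brings in the function and variable definitions the command depends on.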

Related

Facing error while running tcl script with an argument using source command

I am trying to source a Tcl script inside another script using the source command. The syntax I am using is as follows:
source /path/script.tcl vikas  ;# vikas is an argument to the script
But I am facing an issue while executing it. The error I am getting is:
TCLERR: couldn't read file "/path/script.tcl vikas" : no such file or directory.
Kindly help me with the solution.
Thank you!
The source command doesn't pass arguments; it just reads the script in and evaluates it (with a minor nuance for info script).
How would you expect the arguments to be seen by the script? If it is via the argv global variable, then you can just set that up before calling source. It's not special at all, except that tclsh and wish write the list of arguments to it during startup.
You can script things easily enough.
proc sourceWithArguments {filename args} {
    global argv
    set old $argv
    try {
        set argv $args
        uplevel "#0" [list source $filename]
    } finally {
        # Restore the original arguments at the end
        set argv $old
    }
}
The source <file> command simply reads the commands in <file>, almost as if you had copy-pasted them.
If you have a main file and another file which is sourced from the main file, then you can just set a variable in the main file and use that variable in the sourced file.
# sourced.tcl
puts $parameter_from_main
# main.tcl
set parameter_from_main "Hello"
source sourced.tcl
In this case, both the main.tcl and sourced.tcl files are running in the same global scope. Some people may dislike this solution because you can get namespace pollution, but it might be good enough for what you need to do.

Bash function variable command not found error

I have a bash script like this with a function
_launch()
{
    ${1}
}

testx()
{
    _launch "TESTX=1 ls -la"
}

testx
I get the error "TESTX=1: command not found" in the _launch function. Why?
When I run TESTX=1 ls -la directly in the shell, it works fine.
It's not a good idea to use variables to hold commands. See BashFAQ/050
As long as you are dealing with executables and not shell built-ins, you could do this:
_launch() {
    env $1
}
This won't play well if there are literal spaces in the var=value pairs or in the arguments to the command being launched.
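A quick illustration of that failure mode (hypothetical values):
_launch() {
    env $1   # unquoted $1 is split on whitespace before env sees it
}
_launch 'TESTX="a b" ls -la'
# $1 splits into four words: TESTX="a / b" / ls / -la
# env sets TESTX to '"a', then tries to execute the command 'b"'
# and fails with something like: env: 'b"': No such file or directory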
You can overcome this problem by passing the command directly to the launch function and setting your variables in the function invocation itself, like this:
_launch() {
    # your launch prep steps here...
    "$@"   # run the command
    # post launch code here
}
TESTX=1 TESTY=2 TESTZ=3 _launch ls -la
The variables would be passed down to the launched command as environment variables.
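A quick check of that claim (a sketch):
_launch() { "$@"; }
TESTX=1 _launch printenv TESTX
# prints 1: the assignment is in effect while the function runs
# and is exported to the commands it launches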
You get the error because bash first looks at the statement to see whether it is a variable assignment, and only then does parameter expansion. In your case, bash doesn't recognize that you want to extend the environment for your ls command, and treats TESTX=1 as the command to be executed.
For the same reason, the following does not set the bash variable ABC:
x='ABC=55'
$x
This would print ABC=55: command not found.
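If the goal really is to perform an assignment held in a string, the declare builtin can do that parsing for you (a sketch; note that inside a function, declare creates a local variable unless you use declare -g):
x='ABC=55'
declare "$x"   # parsed as an assignment, unlike a bare $x
echo "$ABC"    # prints: 55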

How to use the source command with the system() function?

I need to source a few environment variables from another file. If I use the source command with the system() function, it complains with "No such file or directory". Am I missing something?
My code looks like the below; I have only the system() function running the source command. The source file contains only one command: pwd (print working directory).
perl_system.pl
#!/usr/bin/perl
system "source env.mk"
env.mk (the file I want to source; it contains just pwd for now):
pwd
When I run this, I see the error below:
$ perl -w perl_system.pl
Can't exec "source": No such file or directory at perl_system.pl line 2.
source is a shell built-in that executes a shell script using the current shell interpreter. It doesn't work as an external command, and it wouldn't change the environment of your Perl process even if you changed your system call to invoke a shell instead of trying to run an external program directly.
You could, however, run your env.mk, output the resulting environment, and update Perl's environment accordingly:
for my $env (`bash -c 'source env.mk; env'`) {
    chomp $env;
    my ($var, $val) = split /=/, $env, 2;
    $ENV{$var} = $val;
}
(with obvious problems if environment variables contain newlines).
Update: just read all of your question, not just the beginning. If all you want to do is execute a shell script, just do:
system "sh env.mk";
source is completely unnecessary for this.

Is there a good way to preload or include a script prior to executing another script?

I am looking to execute a script but have it include another script before it executes. The problem is that the included script would be generated, while the executed script must remain unmodified. One solution I came up with was to reverse the include: make the include script a wrapper that calls set to set the arguments for the executed script and then dots/sources it. E.g.
#!/bin/bash
# Generated wrapper or include script.
: Performing some setup...
target_script=$1 ; shift
set -- "$#"
. "$target_script"
Where target_script is the script I actually want to run, importing settings from the wrapper.
However, the potential problem I face is that callers of the target script, or even the target script itself, may expect $0 to be set to the path of its location on the file system. Because this wrapper approach overrides $0, its value may be unexpected and could produce undefined behaviour.
Is there another way to perform what is, in effect, an LD_PRELOAD, but in scripted form through bash, without interfering with its runtime parameters?
I have looked at --init-file and --rcfile, but these only seem to be honoured for interactive shells.
Forcing interactive mode does seem to allow me to specify --rcfile:
$ bash --rcfile /tmp/x-include.sh -i /tmp/xx.sh
include_script: $0=bash, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh
Content of the x-include.sh script:
#!/bin/bash
echo "include_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
Content of the xx.sh script:
#!/bin/bash
echo "target_script: \$0=$0, \$BASH_SOURCE=$BASH_SOURCE"
From the bash documentation:
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in
the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read
and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
So that settles it then:
BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
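With the two scripts above, the run should now look like this; note that $0 is preserved, unlike in the --rcfile experiment:
$ BASH_ENV=/tmp/x-include.sh /bin/bash /tmp/xx.sh
include_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/x-include.sh
target_script: $0=/tmp/xx.sh, $BASH_SOURCE=/tmp/xx.sh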

Call a main script's function in script files called inside the shell script

I am still learning to write shell scripts, so I don't know whether this can be done.
I have a main script called main.sh:
main.sh
#!/bin/bash
function log {
    echo "[${USER}][`date`] - ${*}" >> ${LOG_FILE}
}
home/script/loadFile.sh && home/script/processData.sh
So my question is: can I call the log function of main.sh inside the loadFile.sh and processData.sh script files?
I tried it, but I got the error
line 1: log: command not found
Thanks.
This is not portable, but in bash you can simply export the function definition:
export -f log
home/script/loadFile.sh && home/script/processData.sh
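For this to work, the child scripts must themselves run under bash, and any variables the function uses must be exported too; here, log writes to ${LOG_FILE}. A sketch, with hypothetical contents for the child script:
# main.sh: export the variable the function needs, then the function itself
export LOG_FILE=/tmp/main.log   # hypothetical path
export -f log
home/script/loadFile.sh && home/script/processData.sh

# home/script/loadFile.sh (hypothetical contents)
#!/bin/bash
log "loading file..."   # resolved from the function definition exported by main.sh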
You need to invoke them like this:
. home/script/loadFile.sh && . home/script/processData.sh
But if you have an exit command in your loadFile.sh or processData.sh, then your main.sh will exit as well.
When you start loadFile.sh and processData.sh the way you do, they are started as ordinary executables, so a new shell interpreter instance is started for each script. That new interpreter does not know anything about your log function.
When you run loadFile.sh and processData.sh like this:
. home/script/loadFile.sh && . home/script/processData.sh
the shell treats them as shell scripts rather than ordinary executables and executes them in the current context, making the log function visible to them. Also, any functions and variables defined inside loadFile.sh and processData.sh will remain visible in the parent shell after they finish, so these scripts have many ways to damage the parent shell, which makes this approach unsafe in some situations.
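If you want the shared log function without the risk to the parent shell, one middle ground is to source each script inside a subshell. The subshell inherits the parent's functions and variables, but an exit or a variable assignment inside the child cannot affect main.sh:
( . home/script/loadFile.sh ) && ( . home/script/processData.sh )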
