Error "exporting" shell functions from within a Java process - bash

I have a script which looks like this:
#!/bin/bash
function func1() {
    echo "HELLO!"
}
export -f func1
function func2() {
    echo "HI!!"
    func1
}
export -f func2
I locally start a hadoop tasktracker and my /usr/lib/hadoop/conf/hadoop-env.sh looks something like this:
# .. few configuration params
source my_shell_file.sh
my_function
When I start the tasktracker everything is fine. It prints out a couple of echo statements that I have inside my_function. When I start a hadoop job with a 'mapper.py' file, it works normally. It even takes the configuration params present in my_function. The problem occurs when I declare my mapper as
-mapper 'bash -c "func1 ; python mapper.py"'
It then throws this error:
/bin/bash: func2: line 1: syntax error: unexpected end of file
/bin/bash: error importing function definition for `func2'
/bin/bash: func1: line 1: syntax error: unexpected end of file
/bin/bash: error importing function definition for `func1'
I'm not sure what is happening here. I tried 'sh -c' instead of 'bash -c' and I get the same issue.
EDIT: The shell script works fine when I "source" it on the console. It recognizes the functions defined in the shell file on the console.
EDIT2: Added the EXACT contents of the shell file.

This is a bug in Hadoop.
Bash functions are passed as regular environment variables so that exporting works across processes:
foo='() { echo "hello world"; }' bash -c 'foo'
The environment variables bash generates for them contain multiple lines, which is normally fine.
Hadoop Streaming, however, has a badly written Environment class that tries to reimplement System.getenv() by naively parsing the output of env.
Since it doesn't handle multi-line variables, it destroys your functions.
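You can see the multi-line values this is talking about outside of Hadoop. A rough sketch (after sourcing my_shell_file.sh in an interactive shell; the exact variable name and formatting depend on the bash version, and newer bashes use a BASH_FUNC_func2%% style name):
export -f func2
env | grep -A 2 func2
# func2=() {  echo "HI!!";
#  func1
# }
# A parser that reads env output line by line and splits each line on '=' keeps only
# the first line of that definition. Handing bash such a truncated value is exactly what
# produces the "unexpected end of file" / "error importing function definition" messages
# shown in the question.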
I tried to submit a bug report, but I didn't have a user for their bug tracker and I didn't want to subscribe to their mailing list. If you'd like this fix, I suggest you submit your own.

Related

Bash execute multiple command stored in variable without eval

I have a script wrapper.sh that takes a string as an argument.
wrapper.sh
#!/usr/bin/env bash
node ./index.js $1
Now if I pass hello as the argument it runs fine, but if I pass hello&pwd then the full string is passed as an argument to the nodejs file, instead of just hello being passed to nodejs and pwd running separately.
Example
./wrapper.sh "hello"
# nodejs gets argument hello : Expected
./wrapper.sh "hello&pwd"
# nodejs gets argument hello&pwd : Not Expected
# Required: only hello in nodejs, while pwd runs separately
I have tried a lot of solutions online, but none seem to work except eval and bash -c, which I don't want to use because the script doesn't wait for these commands to finish.
Edit
wrapper.sh is executed by third-party software and the content of the script is configured dynamically by the user, so there isn't much in my hands. The job of my module is just to set up the script properly so that it can be executed by the third-party software.
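For what it's worth, a tiny sketch (demo.sh is a hypothetical stand-in for wrapper.sh) of why the whole quoted string arrives as a single argument: the quotes strip & of its special meaning in the calling shell, before wrapper.sh or node ever runs.
#!/usr/bin/env bash
# demo.sh - print exactly what arrives in $1
printf 'argv[1] = %s\n' "$1"
Running it:
$ ./demo.sh "hello&pwd"
argv[1] = hello&pwd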

How to write to file in bash without parsing?

While trying to write the following command, which contains variable paths, to a file at /etc/languagetools/language_tool.sh:
java -jar /path/to/languagetools/languagetool-commandline.jar "${@:1}"
I am experiencing some difficulties preventing evaluation of the "${@:1}". The function that does the writing contains:
#!/bin/bash
some_function() {
    local create_target_file=$(sudo touch $LANGUAGE_TOOL_TARGET_DIR/$LANGUAGE_TOOL_CONTROL_SCRIPTNAME)
    local make_readable=$(chmod 777 $LANGUAGE_TOOL_TARGET_DIR/$LANGUAGE_TOOL_CONTROL_SCRIPTNAME)
    command_one="java -jar $LANGUAGE_TOOL_TARGET_DIR/$LANGUAGE_TOOL_SNAPSHOT_DIRNAME/$LANGUAGE_TOOL_TARGET_FILENAME "
    command_two='${@:1}'
    local write_content_to_file=$(sudo sh -c "echo $command_one$command_two > $LANGUAGE_TOOL_TARGET_DIR/$LANGUAGE_TOOL_CONTROL_SCRIPTNAME")
}
Which returns:
sh: 1: Bad substitution
Hence, I was curious: how do I write the command string to the file without the content of the command being parsed?
You don't need a command substitution or variable assignment. Just
echo "$command_one$command_two" > "$LANGUAGE_TOOL_TARGET_DIR/$LANGUAGE_TOOL_CONTROL_SCRIPTNAME"

ubuntu function, works when sourced, but not with the bash command

I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh will work when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $? I get a 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
    echo "Hello"
}
hello
hello
hello
When I call this function, without having done anything else, I would type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it just with $ bash addseq2.sh 1 2 3, I don't see any result. I can see, after checking as you suggested with $ echo $?, that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh and then call it just by typing $ addseq2 1 2 3, it returns 6 as expected.
I don't understand why the two functions are behaving differently.
When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
    echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
In the case of addseq2.sh, the commands are:
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
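Using hello.sh from the question, the difference looks roughly like this in an interactive session:
$ bash hello.sh      # a child bash defines hello, calls it three times, then exits
Hello
Hello
Hello
$ hello              # the definition died with that child process
bash: hello: command not found
$ source hello.sh    # same commands, but run in the current shell
Hello
Hello
Hello
$ hello              # now the function persists
Hello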
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$#"
at the end of addseq2.sh.
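With that one line added, running the script directly does what sourcing plus a manual call did before:
$ bash addseq2.sh 1 2 3
6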
When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention 0 indicates success. A better way to test in a script is something like this:
if addseq.sh
then
    echo 'script worked'
else
    # Redirect error message to stderr
    echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function but you are not calling it. With your code I appended a call:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in $@
    do
        let sum=sum+$element
    done
    echo $sum
}
addseq2 1 2 3 4 # <<<<<<<
and I got:
10
By the way, an alternative way of saying:
let sum=sum+$element
is:
sum=$((sum + element))
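Putting the pieces together, a compact version of the script (same behaviour, just using arithmetic expansion and calling the function with the script's own arguments) might look like this:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
    local sum=0
    for element in "$@"
    do
        sum=$((sum + element))
    done
    echo "$sum"
}
addseq2 "$@"   # pass the script's arguments straight to the function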

Script not working as Command line

I've created a simple bash script that does the following:
#!/usr/bin/env bash
cf ssh "$1"
When I run the command from the CLI, like cf ssh myapp, it runs as expected, but when I run the script like
. myscript.sh myapp
I get the error: App not found
I don't understand what the difference is; I've provided the app name when invoking the script. What could be missing here?
Update
When I run the script with the following it works. Any idea why the "$1" is not working?
#!/usr/bin/env bash
cf ssh myapp
When you do this:
. myscript.sh myapp
You don't run the script, but you source the file named in the first argument. Sourcing means reading the file, so it's as if the lines in the file were typed on the command line. In your case what happens is this:
myscript.sh is treated as the file to source and the myapp argument is ignored.
This line is treated as a comment and skipped.
#!/usr/bin/env bash
This line:
cf ssh "$1"
is read as it stands. "$1" takes the value of $1 in the calling shell. Possibly - most likely in your case - it's blank.
Now you should know why it works as expected when you source this version of your script:
#!/usr/bin/env bash
cf ssh myapp
There's no $1 to resolve, so everything goes smoothly.
To run the script and be able to pass arguments to it, you need to make the file executable and then execute it (as opposed to sourcing it). You can execute the script, for example, this way:
./script.bash arg1 arg2
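For the script in the question that would be something like (assuming the file really is called myscript.sh):
$ chmod +x myscript.sh
$ ./myscript.sh myapp    # a new bash process runs the script, with $1 set to myapp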

Unable to pass parameters to a perl script inside a bash script

I would like to pass parameters to a perl script using positional parameters inside a bash script "tablecheck.sh". I am using an alias "tablecheck" to call "tablecheck.sh".
#!/bin/bash
/scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
Perl script by itself works fine. But when I do "tablecheck MySQLinstance", $1 stays $1. It won't get replaced by the instance. So I get the output as follows:
Exit /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1 &
The job exits.
FYI: alias tablecheck='. pathtobashscript/tablecheck.sh'
I have a bunch of aliases in another bash script, hence the . command.
Could anyone help me? I have gone to the 3rd page of Google to find an answer and tried so many things with no luck.
I am a noob, but maybe it has something to do with it being a background job, or with $1 being in a path... I don't understand why the $1 won't get replaced.
If I copy your exact setup (which, I agree with other commenters, is somewhat unusual) then I believe I am getting the same error message:
$ tablecheck foo
[1]+ Exit 127 /scripts/tables.pl /var/lib/mysql/$1/ /var/mysql/$1/mysql.sock > /tmp/chktables_$1 2>&1
In the /tmp/chktables_foo file that it makes there is an additional error message, in my case "bash: /scripts/tables.pl: No such file or directory"
I suspect permissions are wrong in your case.
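A couple of quick checks along those lines (paths taken from the question; adjust the instance name):
ls -l /scripts/tables.pl              # does the script exist, and is it executable?
head -1 /scripts/tables.pl            # does its shebang point at an existing perl?
ls -d /var/lib/mysql/MySQLinstance    # is the instance directory actually there?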
