Using a function in Upstart configuration - bash

I am writing an Upstart configuration file. Within it, I have a pre-start script, a script, and a pre-stop script stanza. Each of these has a large amount of identical code, so I attempted to refactor that code into a few bash functions. However, in doing so I discovered that "one does not simply write bash in an Upstart configuration file": the bash function keyword is not allowed at the top level, because Upstart interprets it as a stanza name, and function is not a valid Upstart stanza.
# Some header stanzas that are boring
...
env ONE_OF_A_FEW_VARIABLES
...
function an_illegal_function () {...}
pre-start script
  function_call_A
  function_call_B
end script
script
  function_call_A
  function_call_C
  echo "I love pie."
end script
pre-stop script
  function_call_B
  function_call_C
end script
I would really like to avoid the kind of code duplication that will exist if I have to copy-paste the contents of each function into stanzas like those above. How can I get some bash commands DRY'd up into a common location and have each of my *script stanzas reference them?
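One way to avoid the duplication, anticipating the sourcing technique discussed in the related answers below, is to move the shared functions into a separate file and source it at the top of each stanza. A sketch, where /etc/myapp/functions.sh is a hypothetical path and the function bodies are placeholders:
# /etc/myapp/functions.sh -- function definitions only, no top-level commands
function_call_A () { echo "doing A"; }
function_call_B () { echo "doing B"; }
function_call_C () { echo "doing C"; }
Then, in the Upstart job:
pre-start script
  . /etc/myapp/functions.sh
  function_call_A
  function_call_B
end script
Each stanza pays one extra source line, but the shared code lives in a single file.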

Related

How to invoke functions between two shell scripts?

I have logInfo() and logError() functions in a shell script (logger.sh). There are other shell scripts (for example, createuser.sh) which require logging. How do I invoke functions like logInfo() from createuser.sh?
Without a way to invoke them across scripts, these logInfo/logError functions end up copied into every shell script that requires logging.
Put your logger functions in a separate file (that only contains functions, no commands), say myfuncs.sh. Then in any other script that needs these functions, somewhere near the top of that script add a line:
. myfuncs.sh
or, equivalently:
source myfuncs.sh
The functions in myfuncs.sh will then be available in that script.
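For instance, a minimal sketch of myfuncs.sh with the logging functions from the question (the bodies here are assumptions for illustration):
# myfuncs.sh -- definitions only, no top-level commands
logInfo () { printf 'INFO: %s\n' "$*"; }
logError () { printf 'ERROR: %s\n' "$*" >&2; }
Then createuser.sh can use them:
#!/bin/bash
# createuser.sh
. myfuncs.sh
logInfo "creating user..."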
If the only thing in the logger.sh script is the functions (i.e. nothing runs if you execute it from the command line), then you can source the shell script by including the line:
. logger.sh
See: https://ss64.com/bash/source.html

If I curl a raw script and pipe that output into a bash interpreter, how do I preserve an environment variable that was set in that retrieved script?

So I'm using a combination of npm scripts (defined in package.json) and regular bash scripting.
In the overarching script, there is a point in execution where I run npm run set-my-env, during which the variable FOO is set:
npm run set-my-env
... do stuff with $FOO
I'd like to make the variable accessible so that the bash interpreter doesn't tell me it's undefined. The issue is that $FOO is no longer defined when control returns to the calling script. I get that this is how processes work, but this can't be a novel problem, right?
set-my-env actually curls a GitHub endpoint and pipes the raw output into the bash interpreter. Not sure if this makes the overall issue any more convoluted, but I thought I should mention it.
What's the proper way to do this?
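A child process cannot export variables back into its parent, so one workaround is to run the downloaded code in the current shell instead of piping it into a fresh bash process. A sketch, with a placeholder URL:
# instead of: curl -fsSL https://example.com/raw/set-my-env.sh | bash
# source the downloaded code in the current shell (process substitution is bash-specific):
. <(curl -fsSL https://example.com/raw/set-my-env.sh)
echo "$FOO" # FOO is now visible here
# alternatively, have the remote script print KEY=value assignments and eval them:
eval "$(curl -fsSL https://example.com/raw/set-my-env.sh)"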

UNIX/solaris shell script shebang in include file

I have a functions.sh script with a bunch of global functions that I want to use in other scripts. This functions script is written in bash (#!/bin/bash).
Those other scripts have been written over the years, so the older ones run with #!/bin/sh (which is different from #!/bin/bash on Solaris).
My question here is: when you call the functions.sh file (with . /path/to/functions.sh) from within a sh (not bash) script, is the shebang line of "functions.sh" interpreted?
In a nutshell, can you call a bash-written function script from a script running under another shell (with proper shebang lines in both)?
Thanks!
Since you want to use the functions, you need to source the script rather than execute it:
source /path/to/functions.sh
or, per the POSIX standard, do
. /path/to/functions.sh
from within the sh script, which is equivalent to including the contents of functions.sh in the file at the point where the command is run.
You need to understand the difference between sourcing and executing a script.
Sourcing runs the script in the parent shell from which it is invoked; all the environment variables and functions it defines are retained until the parent shell terminates (the terminal is closed, or the variables are reset or unset).
Executing forks a new shell from the parent shell, so any variables and functions, including exported variables, exist only in the sub-shell's environment and are destroyed when the script terminates.
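A quick demonstration of the difference, assuming a hypothetical funcs.sh containing only:
greet () { echo "hello from greet"; }
MY_VAR=hello
Executing it runs those definitions in a sub-shell, which then exits, so nothing survives; sourcing it runs them in the current shell:
sh funcs.sh
greet           # error: greet: command not found
. ./funcs.sh
greet           # prints: hello from greet
echo "$MY_VAR"  # prints: hello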
When you source a file, the shebang in that file is ignored: the file's contents are read into the calling script, so the shebang line is no longer the first line of anything being executed and is treated as an ordinary comment.
When you source an old script with #!/bin/sh, it is therefore interpreted by the caller's shell. Most things written for /bin/sh will work in bash.
However, when you are running a sh or ksh script and you source a bash file, any bash-specific code in it will cause problems.

How to pass shell script variables back to ruby?

I have a Ruby script executing a shell script. How can I pass shell script data back to the Ruby script?
desc "Runs all the tests"
lane :test do
sh "../somescript.sh"
print variables_inside_my_script // i want to access my script data here.
end
I'm able to do the reverse, passing environment variables from Ruby to the shell script:
desc "Runs all the tests"
lane :test do
  puts ENV["test"]
  sh "../somescript.sh" # access test using $test
end
Thanks
It's not so clear what variables_inside_my_script is supposed to mean here, but as a rule operating systems do not allow one to "export" variables from a subshell to the parent, so rubyists often invoke the subcommand with backticks (or equivalent) so that the parent can read the output (stdout) of the subshell, e.g.
output = %x[ ls ]
There are alternative techniques that may be useful depending on what you really need -- see e.g.
Exporting an Environment Variable in Ruby
http://tech.natemurray.com/2007/03/ruby-shell-commands.html
http://blog.bigbinary.com/2012/10/18/backtick-system-exec-in-ruby.html
If the shell script is under your control, have the script write the environment definitions in Ruby syntax to STDOUT. Within Ruby, you eval the output:
eval `scriptsettings.sh`
If your script produces other output, write the environment definitions to a temporary file and use the load command to read them.
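A sketch of that approach (scriptsettings.sh and the variable names are illustrative):
#!/bin/sh
# scriptsettings.sh -- print environment definitions in Ruby syntax on stdout
printf '$tests_passed = %d\n' 42
printf '$build_label = "%s"\n' "nightly"
On the Ruby side, eval `./scriptsettings.sh` then makes $tests_passed and $build_label available; using Ruby global variables ($-prefixed) keeps the eval'd assignments visible outside the eval call itself.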

How to make shell scripts robust to source being changed as they run

Have people noticed that if you modify the source of a shell script, any instances that are currently running are liable to fail?
This in my opinion is very bad; it means that I have to make sure all instances of a script are stopped before I make changes. My preferred behavior would be that existing scripts continue running with old source code and that new instances use the new code (e.g. what happens for perl and python programs).
Do folks have any good workarounds for this behavior, other than pre-copying the shell script to a tempfile and running from that?
Thanks,
/YGA
Very slight addition to the other answers:
#!/bin/sh
{
  # Your stuff goes here
  exit
}
The exit at the end is important. Otherwise, the script file might still be accessed at the end to see if there are any more lines to interpret.
This question was later reposted here: Can a shell script indicate that its lines be loaded into memory initially?
Make sure the shell has to parse the whole file before executing any of it:
#!/bin/ksh
{
  # all the original script here
}
That does the trick.
Incidentally, with Perl (and I assume Python), the program parses the entire file before executing any of it, exactly as recommended here, which is why you don't usually run into this problem with Perl or Python.
The desired behavior may not be possible, depending on complexity of the shell scripts that are involved.
If the full shell script is contained in a single source file, and that file is fully parsed before execution, then the shell script is generally safe from modifications to the copy on the disc during execution. Wrapping all the executable statements into a function (or series of functions) will generally achieve the goal you are after.
#!/bin/sh
doit()
{
  # Stuff goes here
}
# Main
doit
The difficulty comes when the shell script "includes" other shell scripts (e.g. ".", or "source"). If these includes are wrapped in a function, they are not parsed until that statement is reached in the flow of execution. This makes the shell script vulnerable to changes to that external code.
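For example, a sketch (helpers.sh and helper_function are hypothetical names):
#!/bin/sh
doit()
{
  # helpers.sh is only read when doit executes, so edits made to it
  # after this script has started will affect this very run
  . /path/to/helpers.sh
  helper_function
}
# Main
doit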
In addition, if the shell script runs any external program (e.g. shell script, compiled program, etc), that result is not captured until that point in the execution is reached (if ever).
#!/bin/sh
doit()
{
  if [[ some_condition ]] ; then
    resultone=$(external_program)
  fi
}
# Main
doit
This answer contains a robust and self-contained way to make a script resistant to this problem: have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
  rm -f /tmp/copy-$$
  cp "$0" /tmp/copy-$$
  exec /tmp/copy-$$ "$@"
  echo "error copying and execing script"
  exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
