Bash script overrides hard-coded variables in an executed second script

I'm calling uncle. I'm attempting to override variables that have hard-coded values in a second bash script that I call. I have no control over that script; I'm building a wrapper around it to adjust some build behavior before it finally kicks off a Yocto build. After reading and trying numerous examples, I'm not sure what else to try.
Example of the situation:
build.sh calls build2.sh:
IS_DEV=1 ./build2.sh # trying to override the value
build2.sh contains:
IS_DEV=0 # hardcoded value
echo $IS_DEV
# always prints 0
I have also tried export IS_DEV=1 before calling build2.sh.
I'm sure this is pretty simple, but I cannot seem to get this to work. I appreciate any assistance. Is this possible? I'm using GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) on Ubuntu 16.04.4 LTS.
Oh, I have also tried the sourcing technique with no luck.
IS_DEV=1 . ./build2.sh
IS_DEV=1 source ./build2.sh
Where am I getting this wrong?
Much appreciated.

If you can't modify the script, execute a modified version of it.
sed 's/^IS_DEV=0/IS_DEV=1/' build2.sh | sh
Obviously, pipe to bash if you need Bash semantics instead of POSIX sh semantics.
If the script really hard-codes a value with no means to override it from the command line, modifying that script is the only possible workaround. But the modification can be ephemeral; the above performs a simple substitution on the script, then passes the modified temporary copy through a pipe to a new shell instance for execution. The modification only exists in the pipeline, and doesn't affect the on-disk version of build2.sh.
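If build2.sh takes arguments, note that piping its text to a new shell loses them. With bash you can use process substitution instead, which hands the patched copy to the shell as a real file while still never touching the on-disk script. A minimal sketch of the wrapper, assuming GNU sed and the IS_DEV line shown above:
#!/bin/bash
# build.sh: run an ephemerally patched copy of build2.sh, passing arguments through
bash <(sed 's/^IS_DEV=0/IS_DEV=1/' build2.sh) "$@"
One caveat: inside the copy, $0 points at a /dev/fd path, so this breaks scripts that locate resources relative to their own location.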

missed $ while using variableName in bash script - how to catch such issues?

I have a bash script which does the following:
#!/bin/bash
moduleName=$1
someInfo=`ls | grep -w moduleName`
echo $someInfo
In line 3, I was supposed to use $moduleName, but I missed the $.
Is there any way to find such issues in Bash scripts?
I used ShellCheck, but it didn't report this issue.
For me, the script looks fine; it lists the files whose names contain the string moduleName.
The script is syntactically correct.
The error in line 3 is a semantic error; it changes the meaning of the script. Only a person who knows the intention of the script can detect it.
There is no way to automatically detect such errors, unless you write software that reads your mind and knows that you intended to write $moduleName but mistakenly wrote moduleName instead.
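For reference, here is the corrected script; quoting the expansion also protects against word splitting if the argument ever contains spaces:
#!/bin/bash
moduleName=$1
# "$moduleName" (quoted) is what was intended; the original searched for the literal string
someInfo=$(ls | grep -w "$moduleName")
echo "$someInfo"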

Using variables between files in shell / bash scripting

This question has been asked here many times, but the answers never seem to address my situation.
I have two scripts. The first one contains one or multiple variables, the second script needs those variables. The second script also needs to be able to change the variables in the first script.
I'm not interested in sourcing (where the first script containing the variables runs the second script) or exporting (using environment variables). I just simply want to make sure that the second script can read and change (get and set) the variables available in the first script.
(PS: If I've misunderstood how sourcing or exporting works and it does apply to my scenario, please let me know. I'm not completely closed to those methods; from what I've read, I just don't think they will do what I want.)
Environment variables are per process. One process cannot modify the variables in another. What you're asking for is not possible.
The usual workaround for scripts is sourcing, which works by running both scripts in the same shell process, but you say you don't want to do that.
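For completeness, since you suspect you may have misunderstood it: sourcing just means running the other file's lines in your current shell process, so its variables land directly in your shell. A minimal illustration (the file names are made up for the example):
# vars.sh
varnum1=5

# main.sh
. ./vars.sh      # executes vars.sh in this same shell process
echo "$varnum1"  # prints 5
varnum1=6        # changes it here, but NOT in vars.sh on disk
It still can't write a new value back into vars.sh, which is why the file-based and sed-based approaches below exist.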
I've also given this some thought. I would use files as variables. For example, in script 1 you write the variable values to files:
echo "$varnum1" > /home/username/scriptdir/vars/varnum1
echo "$varnum2" > /home/username/scriptdir/vars/varnum2
And in script 2 you read the values from those files back into variables:
varnum1=$(cat /home/username/scriptdir/vars/varnum1)
varnum2=$(cat /home/username/scriptdir/vars/varnum2)
Both scripts can read or write the variables at any given time. Theoretically two scripts could try to access the same file at the same time; I'm not sure exactly what would happen, but since each file only contains one value, the time to read or write should be extremely short.
To reduce those times even further, you can use a ramdisk.
I think this is much better than scripts editing each other (yuk!). Live editing can mess up a script, and the change only takes effect when you start the script again after the edit.
Good luck!
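If the simultaneous-access caveat above worries you, one common pattern is to write to a temporary file and rename it into place: on the same filesystem, mv replaces the file atomically, so a reader never sees a half-written value. A sketch using the same vars directory as above:
# script 1: atomic write of one variable
echo "$varnum1" > /home/username/scriptdir/vars/varnum1.tmp
mv /home/username/scriptdir/vars/varnum1.tmp /home/username/scriptdir/vars/varnum1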
So after a long search on the web and a lot of trying, I finally found some kind of a solution. Actually, it's quite simple.
There are some prerequisites though.
The variable you want to set already has to exist in the file you're trying to set it in (I'm guessing the variable can be created as well when it doesn't exist yet, but that's not what I'm going for here).
The file you're trying to set the variable in has to exist (obviously. I'm guessing again this can be done as well, but again, not what I'm going for).
Write
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' FILENAME
So, for example, setting the variable called Var1 to the value 5 in the file test.ini:
sudo sed -i 's/^\(Var1=\).*/\15/' test.ini
Read
sudo grep -Po '(?<=VARNAME=).*' FILENAME
So, for example, reading the variable called Var1 from the file test.ini:
sudo grep -Po '(?<=Var1=).*' test.ini
Just to be sure
I've noticed some issues when the script that sets variables is run from a different folder than the one where it is located.
To make sure this always goes right, you can do one of two things:
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' "$(dirname "$0")"/FILENAME
So basically, just put "$(dirname "$0")"/ in front of the filename.
The other option is to store the directory in a variable first, which would look like this:
my_dir=$(dirname "$0")
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' "$my_dir"/FILENAME
So basically, if you've got a file named test.ini, which contains this line: Var1= (In my tests, the variable can start empty, and you will still be able to set it. Mileage may vary.), you will be able to set and get the value for Var1
I can confirm that this works (for me), but since all of you, with way more scripting experience than me, didn't come up with this, I'm guessing it's not a great way to do it.
Also, I couldn't tell you the first thing about what's happening in those commands above; I only know they work.
So if I'm doing something stupid, or if you can explain to me what's happening in the commands above, please let me know. I'm very curious to hear what you think of this solution.
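To answer the "what's happening" question, here is an annotated version of the two commands (sudo is only needed if your user lacks permission on the file):
# Write: ^ anchors at the start of the line; \(Var1=\) captures the name and
# equals sign; .* matches the rest of the line. The replacement \15 is
# backreference 1 (the captured "Var1=") followed by a literal 5, since
# backreferences are single digits. -i edits the file in place.
sed -i 's/^\(Var1=\).*/\15/' test.ini

# Read: -P enables Perl-compatible regexes, -o prints only the matching part,
# and (?<=Var1=) is a lookbehind: the match must be preceded by "Var1=", but
# that prefix is not included in the output. So this prints just the value.
grep -Po '(?<=Var1=).*' test.ini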

shell >& operator?

I have a question about what I think is an operator or argument passer, but Google hasn't turned up anything. The script it's contained in is:
#!/bin/sh
ln mopac.in FOR005
mopac >& FOR006
mv FOR006 mopac.out
When I call "mopac mopac.in" directly, the program runs fine. For my needs, though, mopac is called from within another program using this script, and it seems the input file is failing to pass, so mopac is not running. I don't understand what ">&" is supposed to do, so I am having trouble troubleshooting.
Thanks.
>& FILE is bash shorthand (inherited from csh) for > FILE 2>&1, that is, redirect both standard output and standard error to FILE. It is not POSIX, so if /bin/sh is not bash, as is true on a number of Linux distributions, this line will produce an error. Bash still accepts the construct, though its manual prefers the &> FILE spelling, and the portable form is > FILE 2>&1.
Your script is not passing mopac.in at all; it appears to assume that mopac will read its input from FOR005, and uses ln to make the input available under that name. Perhaps you should change the script to pass mopac.in as a parameter, just as when you run it directly.
Explanation here: http://tldp.org/LDP/abs/html/io-redirection.html
>&j
# Redirects, by default, file descriptor 1 (stdout) to j.
# All stdout gets sent to file pointed to by j.
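Putting the answers together: since "mopac mopac.in" works when you run it by hand, a portable rewrite of the script might look like this (a sketch; it assumes mopac really does accept the input file as an argument):
#!/bin/sh
# pass the input explicitly and use the portable spelling of the redirection
mopac mopac.in > mopac.out 2>&1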

How to get the full pathname of the current shell script?

Is there a less brute-force way to do this?
#!/bin/ksh
THIS_SCRIPT=$(/usr/bin/readlink -f $(echo $0 | /bin/sed "s,^[^/],$PWD/&,"))
echo $THIS_SCRIPT
I'm stuck using ksh but would prefer a solution that works in bash too (which I think this does).
Entry #28 in the bash FAQ:
How do I determine the location of my script? I want to read some config files from the same place.
There are two prime reasons why this issue comes up: either you want to externalize data or configuration of your script and need a way to find these external resources, or your script is intended to act upon a bundle of some sort (eg. a build script), and needs to find the resources to act upon.
It is important to realize that in the general case, this problem has no solution. Any approach you might have heard of, and any approach that will be detailed below, has flaws and will only work in specific cases. First and foremost, try to avoid the problem entirely by not depending on the location of your script!
...
Using BASH_SOURCE
The BASH_SOURCE internal bash variable is actually an array of pathnames. If you expand it as a simple string, e.g. "$BASH_SOURCE", you'll get the first element, which is the pathname of the currently executing function or script.
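As a short illustration (bash-only, so it won't satisfy the ksh requirement by itself; readlink -f additionally assumes GNU coreutils):
# works even when the file is sourced, where $0 would name the parent script
THIS_SCRIPT=$(readlink -f "${BASH_SOURCE[0]}")
echo "$THIS_SCRIPT"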
I've always done:
SCRIPT_PATH=$(cd `dirname ${0}`; pwd)
I've never used readlink before: is it GNU-only? (I.e., will it work on HP-UX, AIX, and Solaris out of the box? dirname and pwd will.)
(edited to add `` which I forgot in original post. d'oh!)
(edit 2 to put on two lines which I've apparently always done when I look at previous scripts I'd written, but hadn't remembered properly. First call gets path, second call eliminates relative path)
(edit 3 fixed typo that prevented single line answer from working, back to single line!)
Why didn't I think to try this before I asked the question?
THIS_SCRIPT=$(/usr/bin/readlink -nf "$0")
Works great.
On macOS I use (edit: this only works if you run the script from the directory where the script actually is!):
my_script=$(pwd)/$(basename "$0")
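Combining the ideas above gives a version that works in both ksh and bash and doesn't depend on the current directory (a sketch; unlike readlink -f, it does not resolve a symlinked script to its target):
#!/bin/ksh
# cd into the script's directory in a subshell, then print its absolute path
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
THIS_SCRIPT=$SCRIPT_DIR/$(basename "$0")
echo "$THIS_SCRIPT"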

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution that changes how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv preserves the old inode while cp creates a new one. Since a file's contents are not actually removed while the file is open, you can remove it right away and it will be cleaned up once the shell closes it.
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
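A sketch of what such a fastbash wrapper could look like (the name and location come from this answer; the trick is just to slurp the whole file before handing it to bash):
#!/bin/sh
# /usr/local/fastbash: read the entire script into memory, then run it,
# so later edits to the file cannot affect the running instance
script=$1
shift
exec bash -c "$(cat "$script")" "$script" "$@"
The size limits mentioned in the question still apply, since the whole script travels as a single argument.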
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
# re-exec from a private copy in /tmp so that edits to the original cannot
# affect the already-running instance
if [[ $0 != /tmp/copy-* ]] ; then
rm -f /tmp/copy-$$
cp "$0" /tmp/copy-$$
exec /tmp/copy-$$ "$@"
echo "error copying and execing script"
exit 1
fi
rm "$0" # we are the copy now; the shell keeps it open, so unlinking is safe
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)
