I have a bash script which does the following:
#!/bin/bash
moduleName=$1
someInfo=`ls | grep -w moduleName`
echo $someInfo
In line #3, I was supposed to use $moduleName, but I missed the $.
Is there any way to find such issues in Bash scripts?
I used ShellCheck, but it didn't report this issue.
For me, the script looks fine; it lists the files whose names contain the string moduleName.
The script is syntactically correct.
The error in line #3 is a semantic error; it changes the meaning of the script. Only a person who knows the intention of the script can detect it.
There is no way to automatically detect such errors, unless you write software that reads your mind and knows that you intended to write $moduleName but mistakenly wrote moduleName instead.
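For reference, the intended version would presumably look like this (a sketch; the variable is quoted to be safe with spaces and glob characters):
#!/bin/bash
moduleName=$1
# use the variable's value, not the literal word "moduleName"
someInfo=$(ls | grep -w "$moduleName")
echo "$someInfo"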
I'm calling uncle. I'm attempting to override variables that have hard-coded values in a second bash script that I'm calling. I have no control over that script, so I'm building a wrapper around it to adjust some build behavior before it finally kicks off a Yocto build. I'm not sure what else to try after reading and trying numerous examples.
Examples of the situation:
build.sh calls build2.sh:
IS_DEV=1 ./build2.sh   # trying to override the value
build2.sh:
IS_DEV=0               # hardcoded value
echo $IS_DEV           # always results in 0
I have also tried export IS_DEV=1 before calling build2.sh.
I'm sure this is pretty simple, but I cannot seem to get this to work. I appreciate any assistance. Is this possible? I'm using GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) on Ubuntu 16.04.4 LTS.
Oh, I have also tried the sourcing technique with no luck.
IS_DEV=1 . ./build2.sh
IS_DEV=1 source ./build2.sh
Where am I getting this wrong?
Much appreciated.
If you can't modify the script, execute a modified version of it.
sed 's/^IS_DEV=0 /IS_DEV=1 /' build2.sh | sh
Obviously, pipe to bash if you need Bash semantics instead of POSIX sh semantics.
If the script really hard-codes a value with no means to override it from the command line, modifying that script is the only possible workaround. But the modification can be ephemeral: the command above performs a simple substitution on the script text and pipes the modified copy to a new shell instance for execution. The modified version exists only in the pipeline and never affects the on-disk build2.sh.
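Put together, the wrapper could look something like this (a sketch; it assumes build2.sh assigns IS_DEV=0 at the start of a line, as in the question):
#!/bin/bash
# build.sh (sketch): run a modified copy of build2.sh with IS_DEV forced to 1.
# The on-disk build2.sh is never touched; the edit lives only in the pipeline.
sed 's/^IS_DEV=0/IS_DEV=1/' build2.sh | bash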
There are a lot of tips (and warnings) on here for obfuscating various items within scripts.
I'm not trying to hide a password, I'm just wondering if I can obfuscate an actual command within the script to defeat the casual user/grepper.
Background: We have a piece of software that helps manage machines within the environment. These machines are owned by the enterprise. The users sometimes get it in their heads that this computer is theirs and they don't want "The Man" looking over their shoulders.
I've developed a little something that will check to see if a certain process is running, and if not, clone it up and replace.
Again, the purpose of this is not to defeat anyone other than the casual user.
It was suggested that one could echo an octal value (the 'obfuscated' command) and use it as a variable within the script. e.g.:
strongBad=`echo "\0150\0157\0163\0164\0156\0141\0155\0145"`
I could then use $strongBad within the shell script to slyly call the commands that I wanted to call with arguments?
/bin/$strongBad -doThatThingYouDo -DoEEET
Is there any truth to this? So far it has worked when typed directly into the shell (using the -e flag with echo), but not so much within the script. I'm getting unexpected output, perhaps because of the way I'm using it?
As a test, try this in the command line:
strongBad=`echo -e "\0167\0150\0157"`
And then
$strongBad
You should get the same output as "who".
EDIT
Upon further review, the addition of the path to the echo command in the variable is breaking it. Perhaps that's the source of my issue.
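A variant worth noting (a sketch): printf interprets octal escapes in its format string without needing any flag, so it behaves the same inside a script as on the command line:
strongBad=$(printf '\150\157\163\164\156\141\155\145')   # same bytes as the octal example above
/bin/$strongBad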
You can do a rotate-13 on any command you want hidden beforehand, then just have the obfuscated command in the shell script.
This little bash script:
#!/bin/bash
function rot13 {
  # ROT13 whatever is passed as arguments
  echo "$@" | tr '[a-m][n-z][A-M][N-Z]' '[n-z][a-m][N-Z][A-M]'
}
rot13 echo hello, world!
`rot13 rpub uryyb, jbeyq!`
Produces:
rpub uryyb, jbeyq!
hello, world!
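In practice you would generate the obfuscated string once, outside the script, and ship only the decoder; a sketch (using hostname purely for illustration, as in the octal example earlier):
# one-time step, outside the script:
#   echo 'hostname' | tr 'a-zA-Z' 'n-za-mN-ZA-M'   # prints: ubfganzr
# inside the script, only the obfuscated form appears:
cmd=$(echo 'ubfganzr' | tr 'a-zA-Z' 'n-za-mN-ZA-M')   # decodes back to "hostname"
"$cmd"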
I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script2 is what I posted, but with the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for >, but it doesn't show up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) using ssh. Before that is done, the output file is also deleted using ssh. I don't know why; I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved. Although I'm still not sure why it would be a problem. I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. Most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. The shell will then refuse to overwrite an existing file with > and report an error, which makes the offending command easy to spot.
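A quick illustration of the behavior (sketch):
set -o noclobber
echo "some text here" > output.txt    # succeeds only if output.txt does not exist yet
echo "oops" > output.txt              # fails: cannot overwrite existing file
echo "more data" >> output.txt        # appending is still allowed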
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it (look for commands using this file without >>). Or it is used by some command as both input and output, and the two step on each other (look for the file used with <).
Lastly, you could have a race condition with a command writing to the file in the background, started before that echo.
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional for a condition that rarely holds. Just a guess, but it would explain why you don't always see it.
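Something like this would surface any non-appending redirection into the file (a sketch; adjust the path to wherever the scripts live):
# list every redirection into output.txt that is not an append
grep -rn 'output\.txt' . | grep '>' | grep -v '>>'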
This is an interesting problem. Please post the solution when you find it!
echo: write error: Bad file descriptor
Throughout my code (through several bash scripts) I encounter this error. It happens when I'm trying to write or append to a (one) file.
LOGRUN_SOM_MUT_ANA=/Volumes/.../logRUN_SOMATIC_MUT_ANA
I use the absolute path for this variable, and the same file is used by every script that is called. The scripts have a bunch of lines just like the one below. I pull the variable in with '.' (source) in each script.
echo "debug level set for $DEBUG_LEVEL" >> ${LOGRUN_SOM_MUT_ANA}
Worth noting:
It typically happens AFTER the FIRST time I write to it.
I read about files 'closing' themselves and yielding this error
I am using the above line in one script, and then calling another script.
I'd be happy to clarify anything.
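One thing I could try to rule out the file being reopened between writes (just a sketch, reusing the same variable): open the log once on a dedicated file descriptor and append through that:
exec 3>>"${LOGRUN_SOM_MUT_ANA}"   # open the log once, in append mode, on fd 3
echo "debug level set for $DEBUG_LEVEL" >&3
# ... later, when done logging:
exec 3>&-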
For others encountering the same stupid error under Cygwin in a script that works under a real Linux: no idea why, but it can happen:
1) after a syntax error in the script
2) because Cygwin bash wants you to replace ./myScript.sh with . ./myScript.sh (where the dot is the bash-style include directive, aka source)
I figured it out: the thumb drive I'm using is encrypted. It outputs to /tmp/, so it's a permission thing. That's the problem!
UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution that changes how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
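A concrete version of the same idea, with a trivial body just to show the shape (a sketch):
#!/bin/bash
# The whole body is captured into a variable before anything runs,
# so editing this file mid-run should not affect the current process.
script=$(cat <<'SETVAR'
echo "long job starting"
sleep 60
echo "long job finished"
SETVAR
)
eval "$script"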
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
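A minimal sketch of such a wrapper (fastbash is the hypothetical name suggested above, not an existing tool):
#!/bin/bash
# Hypothetical /usr/local/fastbash: read the target script fully into memory,
# then exec bash on that in-memory copy so later edits to the file
# cannot affect the running job.
target=$1; shift
script=$(cat "$target")
exec bash -c "$script" "$target" "$@"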
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
  rm -f /tmp/copy-$$
  cp "$0" /tmp/copy-$$
  exec /tmp/copy-$$ "$@"
  echo "error copying and execing script"
  exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)