Bash Script File Descriptor echo - bash

echo: write error: Bad file descriptor
Throughout my code (spread across several bash scripts) I keep encountering this error. It happens when I try to write or append to a single file.
LOGRUN_SOM_MUT_ANA=/Volumes/.../logRUN_SOMATIC_MUT_ANA
I use an absolute path for this variable, and the same file is used by every script that is called. Each script contains a bunch of lines just like the one below; I pull the variable in with the '.' (source) builtin in each script.
echo "debug level set for $DEBUG_LEVEL" >> ${LOGRUN_SOM_MUT_ANA}
Worth noting:
It typically happens AFTER the FIRST time I write to it.
I have read about files 'closing' themselves and yielding this error.
I am using the above line in one script, and then calling another script.
I'd be happy to clarify anything.

For others encountering the same stupid error under Cygwin, in a script that works under real Linux: no idea why, but it can happen:
1) after a syntax error in the script
2) because Cygwin bash wants you to replace ./myScript.sh with . ./myScript.sh (where the leading dot is the bash-style include directive, a.k.a. source)

I figured it out: the thumb drive I'm using is encrypted. It outputs to /tmp/, so it's a permissions thing. That's the problem!
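For anyone hitting intermittent write errors on a shared log like this, one defensive pattern is to open the file once on a dedicated file descriptor and write to that, instead of reopening the file for every echo. A minimal sketch, assuming LOGRUN_SOM_MUT_ANA is set to the absolute log path as in the question:

#!/bin/bash
# Assumes LOGRUN_SOM_MUT_ANA is already set, as in the question.
# Open the log once on FD 3; every write below reuses the open descriptor
# instead of reopening the file for each echo.
exec 3>>"${LOGRUN_SOM_MUT_ANA}"

echo "debug level set for $DEBUG_LEVEL" >&3
# ... more logging ...

exec 3>&-   # close FD 3 when finished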

Related

bash commands to remote hosts - errors with writing local output files

I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands, and then writes the output in a local file, something along the lines of:
ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile2.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile3.txt;
...(same repeats itself, with new commands and new file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash: ./path/to/file/newFile1.txt: No such file or directory
bash: ./path/to/file/newFile2.txt: No such file or directory
bash: ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist yet; they are meant to be created. The directory paths themselves, however, are valid.
The strange thing is that, if I copy and paste the whole big command, then it works without any issue. I'd rather have it automated tho ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
        - Outputs
        - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First things first, the script creates the directories where the files will be stored.
outputFolder="./Outputs"
...
mkdir -p "${outputFolder}/f${fileNumb}"
...
The script then goes on to create the commands that will be called on the remote hosts; their respective outputs will be placed in the created directories.
The directories are there. Running the commands gives me the errors; however, printing the commands and then copying them into a shell at the same location works for some reason. I have also tried giving the full path to the directory, with the same issue.
Hope I've been a bit clearer.
If this is the exact error message you get:
bash: ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space: a non-breaking space, perhaps, possibly something an editor might produce if you hit Alt-Space or the like.
I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but with the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's your problem.
I suspect you have made a false assumption about the way the working directory changes when you run a script. It doesn't. You are listing relative paths, so it's important to know what they are relative to. That is the most likely reason it works when you copy and paste: you are executing from a different working directory.
I am new to bash scripting and was building my script based on another one I had seen. I was "running" the command by simply calling the variable where the command was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, should have given the full script from the start.
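For anyone wondering why plain $cmd fails here: redirection operators stored in a string are not re-parsed during variable expansion, so they get passed along as literal arguments. A minimal illustration of the mechanism (the command contents are hypothetical):

cmd='echo hello 2> /tmp/err.txt'
# Plain expansion: after word splitting, "2>" and "/tmp/err.txt" are passed
# to echo as literal arguments; no redirection takes place.
$cmd          # prints: hello 2> /tmp/err.txt
# eval re-parses the expanded string, so the redirection is honored.
eval "$cmd"   # prints: hello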

bash "command not found" - is my PATH wrong [duplicate]

This question already has answers here:
Bash script prints "Command Not Found" on empty lines (17 answers). Closed 2 years ago.
I'm using bash for the first time. I wrote a script that is supposed to run as the command "stats", but whenever I type "stats" on the command line I get the following error:
bash: stats: command not found
I googled around and a lot of people are saying this error is usually associated with PATH problems. Running "echo $PATH" yields the following results:
/bin:/sbin:/usr/local/bin:/usr/bin:/usr/local/apps/bin:/usr/bin/X11:/nfs/stak/students/z/myname/bin:.
I made sure my program started with
#!/bin/bash
Is my PATH wrong? If so, how do I fix it? If not, any suggestions on what else I should look into? Thank you all for your time and help.
It might be a PATH problem, or it might be a permission problem. Some things to try:
1) If stats is not in your current directory, change directory (cd) to the directory where stats is and do
bash stats
If stats executes correctly, then you know at least that the script is OK. Otherwise, look at the script itself.
2) Try to execute the script with
./stats
If this gives
bash: ./stats: Permission denied
Then you have a permission problem. Do a
chmod a+rx stats
and retry. Note: a+rx is perhaps a bit wide; some may suggest
chmod 755 stats
is a better choice. Hint: from the comments, I see that this is one of your problems.
3) From the name of the directory, I get the impression that the file is on NFS. It might therefore be mounted as 'noexec', meaning that you cannot execute any files from that mount. You might try:
cp stats /tmp
chmod 700 /tmp/stats
/tmp/stats
4) Check the full path name for stats. If you are still in the same directory as stats, try
pwd
Check if this directory is present in the PATH. If not, add it.
export PATH=$PATH:/nfs/stak/students/z/myname/344
and try stats again.
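Independently of the steps above, you can ask bash how (or whether) it resolves the name:

command -v stats   # prints the path bash would run, or nothing if it isn't found
type stats         # says whether stats is an alias, function, builtin, or file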
Your PATH looks reasonable. PATH problems are a common source of errors like this, but not the only one.
The error message ": No such file or directory" suggests that the script file has DOS/Windows-style line endings (consisting of a carriage return followed by linefeed) instead of unix-style (just linefeed). Unix programs (including shells) tend to mistake the carriage return for part of the line, causing massive confusion.
In this instance, it sees the shebang line as "#!/bin/bash^M" (where "^M" indicates the carriage return), goes looking for an interpreter named "/bin/bash^M", can't find it, and prints something like "/bin/bash^M: No such file or directory". Since the carriage return makes the terminal return to the beginning of the line, ": No such file or directory" gets printed on top of the "/bin/bash" part, so it's all you see.
If you have the dos2unix program, you can use that to convert to unix-style line endings; if not, there are a variety of alternate conversion tools. But you should also figure out why the file has Windows/DOS format: did you edit it with a Windows editor, or something like that? Whatever caused it, you should make sure it doesn't happen again, because Windows/DOS format files will cause problems with most unix programs.
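For the record, a few ways to detect and fix DOS line endings (dos2unix may not be installed everywhere; the sed variant assumes GNU sed):

file stats               # typically reports "with CRLF line terminators" for DOS files
cat -v stats | head -n1  # a trailing ^M on the shebang line confirms it
dos2unix stats           # convert in place
sed -i 's/\r$//' stats   # alternative without dos2unix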

Simple Linux shred script error

I have an ultra-simple script that wraps the Linux program shred, with the parameters that have always worked from the command line (bash). Specifically: 'shred -uzn 35'.
The script, named D, has execute permissions set.
When I run the script, bash prints an error:
$ D some_file_to_delete
shred: missing file operand
I realize that the solution to the problem is probably as simple as the program itself. Please help?
Thanks in advance.
EDIT: The error "missing file operand" was due to the fact that the script was not set up to pass its arguments through, such as via "$@". Also, as stated in the accepted answer, I agree that an alias makes total sense for such a scenario (much more sense than, say, a script somewhere in $PATH).
Since you are using a script, not an alias, you need to pass the arguments through
shred -uzn 35 "$@"
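So the whole of D might look like this minimal sketch:

#!/bin/bash
# D: securely delete every file passed as an argument.
shred -uzn 35 "$@"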
In this case, however, I suggest you do make it an alias. In your .bashrc file, add this:
alias D='shred -uzn 35'

First line in file is not always printed in bash script

I have a bash script that prints a line of text into a file, and then calls a second script that prints some more data into the same file. Let's call them script1.sh and script2.sh. The reason it's split into two scripts is that I have different versions of script2.sh.
script1.sh:
rm -f output.txt
echo "some text here" > output.txt
source script2.sh
script2.sh:
./read_time >> output.txt
./run_program
./read_time >> output.txt
Variations on the three lines in script2.sh are repeated.
This seems to work most of the time, but every once in a while the file output.txt does not contain the line "some text here". At first I thought it was because I was calling script2.sh like this: ./script2.sh. But even using source the problem still occurs.
The problem is not reproducible, so even when I try to change something I don't know if it's actually fixed.
What could be causing this?
Edit:
The scripts are very simple. script1 is exactly as you see here, but with different file names. script2 is what I posted, but with the same 3 lines repeated, and ./run_program can have different arguments. I did a grep for the output file, and for >, but neither shows up anywhere unexpected.
The way these scripts are used is that script1 is created by a program (the only difference between the versions is the source script2.sh line). This script1.sh is then run on a different computer (Linux on an FPGA, actually) over ssh. Before that is done, the output file is also deleted using ssh. I don't know why, but I didn't write all of this. Also, I've checked the code running on the host. The only mention of the output file is when it is deleted using ssh, and when it is copied back to the host after script1 is done.
Edit 2:
I finally managed to make the problem reproducible at a reasonable rate by stripping script2.sh of everything but a single line printing into the file. This also let me do the testing a bit faster. Once I had this I got the problem between 1 and 4 times for every 10 runs. Removing the command that was deleting the file over ssh before the script was run seems to have solved the problem. I will test it some more to be sure, but I think it's solved. Although I'm still not sure why it would be a problem. I thought that the ssh command would not exit before all the remove commands were executed.
It is hard to tell without seeing the real code. The most likely explanation is that you have a typo, > instead of >>, somewhere in one of the script2.sh files.
To verify this, set the noclobber option with set -o noclobber. Redirecting with > onto an existing file will then fail with an error instead of silently truncating it.
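For example:

set -o noclobber
echo first  > /tmp/demo.txt   # succeeds if the file does not exist yet
echo second > /tmp/demo.txt   # fails: cannot overwrite existing file
echo third >| /tmp/demo.txt   # >| explicitly overrides noclobber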
Another possibility is that the file is removed under certain rare conditions. Or it is damaged by some command that has random access to it (look for commands using this file without >>). Or it is used by some command as both input and output, and the two step on each other (look for the file used with <).
Lastly, you can have a race condition with a command writing to the file in the background, started before that echo.
Can you grep all your scripts for 'output.txt'? What about scripts called inside read_time and run_program?
It looks like something in one of the script2.sh scripts must be either overwriting, truncating or doing a substitution on output.txt.
For example, there could be a '> output.txt' buried inside a conditional that rarely triggers. Just a guess, but it would explain why you don't always see it.
This is an interesting problem. Please post the solution when you find it!

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
    # Your stuff goes here
    exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution based on how you edit the script?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
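You can watch the inode change with ls -i (illustrative; the inode numbers themselves will vary):

ls -i script           # note the inode number
mv script script-old   # same inode, new name; the running shell still reads it
cp script-old script   # brand-new inode under the original name
rm script-old
ls -i script           # a different inode than before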
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
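Such a wrapper could be as small as this sketch (fastbash and its install path are hypothetical; it slurps the whole file into memory before executing, so it inherits the command-line length limits the question mentions):

#!/bin/bash
# Hypothetical /usr/local/fastbash: read the target script fully, then run it,
# so later edits to the file cannot affect this invocation.
script=$1
shift
exec bash -c "$(cat "$script")" "$script" "$@"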
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
    rm -f /tmp/copy-$$
    cp "$0" /tmp/copy-$$
    exec /tmp/copy-$$ "$@"
    echo "error copying and execing script"
    exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script begins with the characters /tmp/copy-)
(This is inspired by R Samuel Klatchko's answer)
