shell >& operator? - bash

I have a question about what I think is an operator or argument passer but google hasn't turned up anything. The script this is contained in is
#!/bin/sh
ln mopac.in FOR005
mopac >& FOR006
mv FOR006 mopac.out
When I call "mopac mopac.in", the program runs fine, but, for my needs, mopac is called within another program by using this script, but it seems like the input file is failing to pass so mopac is not running. I don't understand what the ">&" is supposed to do so I am having problems troubleshooting.
Thanks.

>& FILE is csh-derived bash shorthand for > FILE 2>&1, that is, redirect both standard output and standard error to FILE. (If /bin/sh is not bash, as is true on a number of Linux distributions, this will elicit an error, since a plain POSIX sh expects a file descriptor number after >&.) Bash still accepts this form, although its manual prefers the equivalent spelling &> FILE.
Your script is not passing mopac.in at all; it assumes that mopac will read its input from FOR005, and uses ln to make the file available under that name. Perhaps you should change the script to accept mopac.in as a parameter, just as you're running it directly.
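For what it's worth, a portable rewrite of that script for a plain /bin/sh might look like this (a sketch; it keeps the FOR005/FOR006 convention the original relies on):
#!/bin/sh
ln mopac.in FOR005
mopac > FOR006 2>&1    # same effect as the csh-style ">& FOR006"
mv FOR006 mopac.out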

Explanation here : http://tldp.org/LDP/abs/html/io-redirection.html
>&j
# Redirects, by default, file descriptor 1 (stdout) to j.
# All stdout gets sent to file pointed to by j.
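The most common digit form duplicates stdout onto stderr; for example:
echo "something went wrong" >&2    # the message goes to standard error, not standard output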

Related

bash script overrides hard coded variables in executed second script

I'm calling Uncle. I'm attempting to manipulate variables that have hard coded values in a second bash script I am calling. I have no control over the script and am building a wrapper around it to adjust some build behavior before it finally kicks off a yocto build. I'm not sure what else to try after reading and trying numerous examples.
Examples of the situation:
build.sh (which calls build2.sh):
IS_DEV=1 ./build2.sh   # trying to override value
build2.sh:
IS_DEV=0               # hardcoded value
echo $IS_DEV
# always results in 0.
I have also tried export IS_DEV=1 before calling build2.sh.
I'm sure this is pretty simple, but I cannot seem to get this to work. I appreciate any assistance. Is this possible? I'm using GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu) on Ubuntu 16.04.4 LTS.
Oh, I have also tried the sourcing technique with no luck.
IS_DEV=1 . ./build2.sh
IS_DEV=1 source ./build2.sh
Where am I getting this wrong?
Much appreciated.
If you can't modify the script, execute a modified version of it.
sed 's/^IS_DEV=0 /IS_DEV=1 /' build2.sh | sh
Obviously, pipe to bash if you need Bash semantics instead of POSIX sh semantics.
If the script really hard-codes a value with no means to override it from the command line, modifying that script is the only possible workaround. But the modification can be ephemeral; the above performs a simple substitution on the script, then passes the modified temporary copy through a pipe to a new shell instance for execution. The modification only exists in the pipeline, and doesn't affect the on-disk version of build2.sh.
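As a sketch (the temporary filename is illustrative), a wrapper along these lines runs a patched copy without ever touching build2.sh on disk; the IS_DEV=1 environment approach fails because the IS_DEV=0 assignment inside build2.sh simply overwrites whatever value is passed in:
#!/bin/bash
# build.sh (sketch): run a patched copy of build2.sh
tmp=$(mktemp /tmp/build2.XXXXXX) || exit 1
sed 's/^IS_DEV=0/IS_DEV=1/' build2.sh > "$tmp"
bash "$tmp"            # or: sh "$tmp" for POSIX semantics
rm -f "$tmp"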

Bash script - run process & send to background if good, or else

I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time) this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a module that you then fail to use. In this case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is to somehow change that command above so I can funnel any error messages into a file for examination at a later stage. I tried variants on go run /path/to/index.go >> errors.txt with the terminating & in different positions but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't actually understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang emitting any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
Narūnas K's answer covers why the &> redirection works.
The reason why the file is created anyway is because the shell creates the file before it even runs the command in question.
You can see this by trying no-such-command > file.out and seeing that even though the shell errors because no-such-command doesn't exist the file gets created (using &> on that test will get the shell's error in the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
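You can see that ordering for yourself, and avoid keeping an empty errors.txt around, with something along these lines (illustrative; [ -s FILE ] is true only when FILE exists and is non-empty):
$ no-such-command > file.out
bash: no-such-command: command not found
$ wc -c file.out
0 file.out        # the empty file was created before the command lookup failed

$ go run /path/to/index.go &> errors.txt &
$ # ... later, once the job has finished:
$ [ -s errors.txt ] || rm -f errors.txt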

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed either line by line or command by command (commands separated by ;), with the exception of compound blocks such as if ... fi, which are read in as a single chunk before being executed:
A shell script is a text file containing shell commands. When such a
file is used as the first non-option argument when invoking Bash, and
neither the -c nor -s option is supplied (see Invoking Bash), Bash
reads and executes commands from the file, then exits. This mode of
operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
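For example, typed interactively (> is the shell's continuation prompt), nothing runs until the closing fi:
$ if true
> then
>     echo "runs only after the closing fi"
> fi
runs only after the closing fi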
It's funny that most OSes (shells) I know of do NOT read the entire content of a script into memory, but run it from disk. Reading it all in up front would allow making changes to the script while it is running. I don't understand why it is done this way, given that:
scripts are usually very small (and don't take up much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you have decided that a script is not running OK (because you are writing/changing/debugging it), do you really care about the rest of that run? You can go ahead and make the changes, save them, and ignore all the output and actions of the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is behaving abnormally. It will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem: it may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can make the changes already, just don't save them yet.
It's not like the old days of DOS, where you had only one screen in front of you, so there is no real need to wait for the run to complete before you open the file again.
No, they are not, and there are good reasons for that.
One thing to keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether it comes from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input source line by line until the kernel returns EOF.
Most shells have no special support for interpreting files; they work with a file just as they would work with a terminal.
In fact this is considered a nice feature, because you can do interesting stuff like this: How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because an interpreter parses the whole file (or at least it would try to and fail). A shell just interprets it line by line and doesn't care about the garbage at the end of the file; you just have to make sure the execution of the script terminates before it reaches the binary part.
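A rough sketch of how such an installer can be laid out (the marker name __PAYLOAD_BELOW__ and the payload handling are purely illustrative):
#!/bin/sh
# Shell commands up front; an opaque binary payload is appended after the
# marker line.  Nothing past the "exit 0" is ever executed.
PAYLOAD_LINE=$(awk '/^__PAYLOAD_BELOW__$/ { print NR + 1; exit }' "$0")
tail -n +"$PAYLOAD_LINE" "$0" > /tmp/payload.tar.gz
tar -xzf /tmp/payload.tar.gz -C /tmp
exit 0
__PAYLOAD_BELOW__
(binary tar.gz data appended here by the installer's build step)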

Do some programs not accept process substitution for input files?

I'm trying to use process substitution for an input file to a program, and it isn't working. Is it because some programs don't allow process substitution for input files?
The following doesn't work:
bash -c "cat meaningless_name"
>sequence1
gattacagattacagattacagattacagattacagattacagattacagattaca
>sequence2
gattacagattacagattacagattacagattacagattacagattacagattaca
bash -c "clustalw -align -infile=<(cat meaningless_name) -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Less verbose output, finishing with:
No sequences in file. No alignment!
But the following controls do work:
bash -c "clustalw -align -infile=meaningless_name -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
bash -c "cat <(cat meaningless_name) > meaningless_name2"
diff meaningless_name meaningless_name2
(No output: the two files are the same)
bash -c "clustalw -align -infile=meaningless_name2 -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
Which suggests that process substitution itself works, but that the clustalw program doesn't like it - perhaps because it creates a non-standard file, or a file with an unusual filename.
Is it common for programs to not accept process substitution? How would I check whether this is the issue?
I'm running GNU bash version 4.0.33(1)-release (x86_64-pc-linux-gnu) on Ubuntu 9.10. Clustalw is version 2.0.10.
Process substitution creates a named pipe. You can't seek into a named pipe.
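One way to check whether that is the problem (a sketch reusing the clustalw invocation from the question): feed the program a plain named pipe yourself. If it fails the same way, the program needs a seekable regular file and process substitution will not help:
mkfifo /tmp/test_fifo
cat meaningless_name > /tmp/test_fifo &
clustalw -align -infile=/tmp/test_fifo -outfile=output_alignment.aln -newtree=output_tree.dnd
rm /tmp/test_fifo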
Yes. I've noticed the same thing in other programs. For instance, it doesn't work in emacs either. It gives "File exists but can not be read". And it's definitely a special file, for me /proc/self/fd/some_number. And it doesn't work reliably in either less or most with default settings.
For most:
most <(/bin/echo 'abcdef')
and anything that short or shorter displays nothing; longer input has its beginning truncated. less apparently works, but only if you specify -f.
I find zsh's =(...) much more useful in practice. It's syntactically the same, except = instead of <. But it just creates a temporary file, so support doesn't depend on the program.
EDIT:
I found zsh uses TMPPREFIX to choose the temporary filename. So even if you don't want your real /tmp to be tmpfs, you can mount one for zsh.
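A minimal illustration of the =(...) form (the canonical example from the zsh documentation; each =(...) expands to the name of a real temporary file, so programs that need to seek are happy):
diff =(sort file1) =(sort file2)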

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution to the way you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, as long as you don't change the contents of the open inode everything will work okay.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
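You can watch the inode change with ls -i (the inode numbers below are made up for illustration):
$ ls -i script
123456 script
$ mv script script-old && cp script-old script && rm script-old
$ ls -i script
123789 script        # new inode; the running shell still holds the old one open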
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
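A sketch of what such a wrapper could look like (the path /usr/local/fastbash and the mktemp template are illustrative, and this simple version does not preserve the original $0):
#!/bin/bash
# /usr/local/fastbash (sketch): copy the script somewhere private, run the
# copy, then clean up, so edits to the original can't affect the running run.
script="$1"; shift
tmp=$(mktemp /tmp/fastbash.XXXXXX) || exit 1
cp "$script" "$tmp"
bash "$tmp" "$@"
status=$?
rm -f "$tmp"
exit "$status"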
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
    rm -f /tmp/copy-$$
    cp "$0" /tmp/copy-$$
    exec /tmp/copy-$$ "$@"
    echo "error copying and execing script"
    exit 1
fi
rm "$0"   # $0 is now the /tmp/copy-$$ copy; unlinking it is safe since the shell keeps it open
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-)
(This is inspired by R Samuel Klatchko's answer)
