My environment:
CentOS 6.5
bash 4.1.2(1)
Sometimes when I intend to add something to a file,
instead of
$ echo "xxx" >> mymemo.txt
I type mistakenly
$ echo "xxx" > mymemo.txt
resulting in losing the memos in mymemo.txt.
I am wondering if there is a way to prohibit the use of the truncating redirection (>) while still allowing the appending redirection (>>)?
You can put set -o noclobber in your .bashrc or .profile.
When this option is set, bash prevents you from overwriting an existing file when redirecting with >:
mint#mint ~ $ echo "foo" > test
bash: test: cannot overwrite existing file
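Appending with >> is not affected by noclobber, and when you really do want to overwrite a file you can still do so explicitly with >|, which overrides the option (shown here on the same test file):
$ echo "foo" >> test     # appending still works
$ echo "foo" >| test     # >| deliberately overrides noclobber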
I want to create a script which can be run in either verbose or silent mode. My idea was to create a variable containing "&> /dev/null" and to clear it when the script is run silently. The silent mode works fine, but when I try to pass the variable as the last "argument" to a command (see my example below), it does not work.
Here is an example:
I want to zip something (I know there is a -q option; the question is more theoretical), and if I write it like this it works as intended:
$ zip bar.zip foo
adding: foo (stored 0%)
$ zip bar.zip foo &> /dev/null
$
But when I declare the &> /dev/null as a variable I get this:
$ var="&> /dev/null"
$ zip bar.zip foo "$var"
zip warning: name not matched: &> /dev/null
updating: foo (stored 0%)
I saw other solutions that deliberately redirect parts of a script, but I'm curious whether my idea can work and, if not, why not.
It would be more convenient to use something like this in my script, because I only need to redirect some of the lines or commands, but too many to wrap each one in an if:
To be clear, I'm trying something like this:
if verbose; then
    verbose="&> /dev/null"
else
    verbose=
fi
command1 "$1" "$2"
command2 "$1"
command3 "$1" "$2" "$3" "$verbose"
command4
Bash will recognize a file name of /dev/stdout as standard output when used in a redirection, whether or not your file system has such an entry.
# If an argument is given to your script, use that value
# Otherwise, use /dev/stdout
out=${1:-/dev/stdout}
zip bar.zip foo > "$out"
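Assuming the snippet above lives in a script called, say, myscript.sh (the name is only for illustration), the caller then chooses where the output goes:
$ ./myscript.sh              # no argument: output goes to the terminal via /dev/stdout
$ ./myscript.sh /dev/null    # silent
$ ./myscript.sh zip.log      # or capture the output in a file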
Your issue is the order in which the shell parses the command line: redirection operators are identified before variable expansion takes place, so the expanded contents of $var are handed to zip as an ordinary argument instead of being treated as a redirection.
You need the shell to re-parse the line after that variable has been expanded. To do this you can use eval, which expands its arguments and then runs the result as a new command line:
eval zip bar.zip foo "$var"
However, be wary of using eval, as unchecked values can lead to exploits, and its use is generally discouraged.
You could also consider setting up your own file descriptor and pointing it at /dev/null, a real file, or even stdout. Each command whose output should go there would then be written as:
zip bar.zip foo >&3
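A minimal sketch of how that could look (the -v flag is just an illustrative choice for switching modes):
#!/bin/bash
# Open fd 3 on stdout in verbose mode, on /dev/null otherwise.
if [ "$1" = "-v" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi
zip bar.zip foo >&3      # silenced unless -v was given
echo "always printed"    # commands without the redirect are unaffected
exec 3>&-                # close the descriptor when finished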
http://www.tldp.org/LDP/abs/html/io-redirection.html#FDREF
I designed a custom script to grep a concatenated list of .bash_history backup files. In my script, I am creating a temporary file with mktemp and saving it to a variable temp. Next, I am redirecting output to that file using the cat command.
Is there a means to create a temporary file (using mktemp), redirect output to it, then store it in a variable in one command, while preserving newline characters?
The below snippet of code works just fine, but I have a feeling there is a more terse and canonical way to achieve this in one line – maybe using process substitution or something of the like.
# Concatenate all .bash_history files into a temporary file `temp`.
temp="$(mktemp)"
cat "$HOME/.bash_history."* > $temp
trap 'rm -f $temp' 0
# Set `HISTFILE` shell variable to the `temp` file.
HISTFILE="$temp"
keyword="$1"
# Search for `keyword` using the `history` command
if [[ "$keyword" ]]; then
# Enable history
set -o history
history | grep "$keyword"
# Disable history
set +o history
else
echo -e "usage: search <keyword>"
exit 0
fi
If you're comfortable with the side effect of making the assignment conditional on tempfile not previously having a nonempty value, this is straightforward via the ${var:=value} expansion:
cat "$HOME/.bash_history" >"${tempfile:=$(mktemp)}"
f=$(mktemp) && cat myfile.txt > "${f}"
I guess there is more than one way to do it. I found the following to work for me:
cat myfile.txt > "$(mktemp)"
Also don't forget about tee:
cat myfile.txt | tee "$(mktemp)" > /dev/null
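If the generated name should also remain available in a variable (which was part of the question), one way is to run the substitution first and reuse it on the same line, for example (a sketch reusing the myfile.txt name from above):
temp=$(mktemp) && cat myfile.txt | tee "$temp" > /dev/null
echo "$temp"    # the path is still available afterwards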
If I echo a string which contains the $(hostname) command, then it works fine. For example, run in terminal:
echo "http://$(hostname)/main.html"
http://artur/main.html
But if I get that string from a file (reading it into a variable with cat first), then when I try to print it with echo it does not work:
$ cat site
http://$(hostname)/main.html
$ mysite=$(cat site)
$ echo $mysite
http://$(hostname)/main.html
What am I doing wrong? Any idea?
The shell performs variable expansion and the shell performs command substitution but it does not do them recursively:
The result of a variable expansion will not be subjected to any further variable expansion or command substitution.
The result of a command substitution will not be subjected to any further variable expansion or command substitution.
This is a good thing because, if it were done recursively, there would be all manner of unexpected results and security issues.
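A quick way to see this (the variable name is arbitrary):
$ cmd='$(hostname)'
$ echo "$cmd"
$(hostname)
The text stored in cmd comes back literally; it is not scanned again for command substitution.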
One solution is to use eval. This is a generally dangerous approach and should be regarded as a last resort.
The safe solution is to rethink what you want. In particular, when do you want hostname evaluated? Should it be evaluated before the site file is created? Or, should it be evaluated when the code is finally run? The latter appears to be what you want.
If that is the case, consider this approach. Put %s in the site file where you want the host name to go:
$ cat site
http://%s/main.html
When you read the site file, use printf to substitute in the host name:
$ mysite=$(printf "$(cat site)" "$(hostname)")
$ echo "$mysite"
http://artur/main.html
With bash, an alternative form of the above is:
$ printf -v mysite "$(< site)" "$HOSTNAME"
$ echo "$mysite"
http://artur/main.html
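One caveat worth keeping in mind, since the contents of site become a printf format string (a general printf property, not specific to this question): any literal percent sign in the file has to be doubled as %%, e.g.:
$ printf 'http://%s/index.html?done=100%%\n' "$(hostname)"
http://artur/index.html?done=100%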
If you only need the $(hostname) placeholder to be replaced by the actual hostname, you could use either of these methods:
With your existing site file:
$ cat site
http://$(hostname)/main.html
$ mysite=$(sed 's|\$(hostname)|'"$(hostname)"'|g' site) # or
$ mysite=$(awk '{print gensub("\\$\\(hostname\\)",hostname,"g", $0);}' "hostname=$(hostname)" site)
$ # Note: Above `sed` based method has risk of shell injection. `awk` option is safer.
$ echo "$mysite"
http://artur/main.html
Or, if you don't mind changing your site file contents, there is a tool typically used for exactly this purpose of replacing placeholders with actual values: the m4 macro processor.
$ cat site
http://HOSTNAME/main.html
$ # HOSTNAME is the placeholder here; can be any identifier string.
$ mysite=$(m4 -D "HOSTNAME=$(hostname)" site)
$ echo "$mysite"
http://artur/main.html
The bash manual page states
If the shell is started with the effective user (group) id not equal to
the real user (group) id, [...] the SHELLOPTS, BASHOPTS, CDPATH, and
GLOBIGNORE variables if they appear in the environment, are ignored
So normally this happens:
> export GLOBIGNORE='*t*'
> echo *
afile
> bash -i
>> # look, the variable is passed through
>> $ echo $GLOBIGNORE
*t*
>> # but to no effect
>> $ echo *
afile anotherfile athirdfile
I do not think it would make much sense to fake the real user id just to enable passing GLOBIGNORE, given the number of other unwanted side effects that would bring.
Is it possible to make the subshell respect an exported GLOBIGNORE?
Some other shell hacks may come to the rescue. All these solutions require at least modifying the shell invocation, but they make the subshell start up already prepared.
As shell startup differs for interactive shells, two strategies are needed.
Interactive
When starting an interactive session, bash normally sources the default ~/.bashrc file. There is a switch to change where it looks for this file. This can be exploited without losing anything, as long as the file passed there also sources the original one.
> echo 'GLOBIGNORE=*t*' > rc
> echo 'source ~/.bashrc' >> rc
> bash --rcfile rc -i
>> echo *
Non-Interactive, Modifiable Command String
As Cyrus already pointed out, one could simply augment the command with the assignment so that it happens inside the subshell to begin with.
> bash -c 'GLOBIGNORE="*t*" ; echo *'
Fully Automated
If modification of the passed commands should be avoided, another special variable can be employed. It is called BASH_ENV and denotes a script to source when starting up a non-interactive session. With this, a strategy similar to --rcfile arises.
> echo 'GLOBIGNORE=*t*' > rc
> BASH_ENV=rc bash -c "echo *"
Or, to be even sleazier and avoid the temporary file rc, we can force piping through /dev/stdin, which is clearly not intended, since the value - is not recognized as standard input here.
> echo 'GLOBIGNORE=*t*' | BASH_ENV=/dev/stdin bash -c "echo *"
In this article, How To Create a Patch File for a RPM, there is this command:
diff -ru base-1.4.4-orig base-1.4.4 >| $HOME/rpmbuild/SOURCES/base-1.4.4-f12.patch
Since the output is written to a file, the simple redirection operator > works fine for me.
Does this operator mean redirect to a pipe? If so, how is a redirect to a pipe different from just a redirect to a file or just a pipe to a process?
By executing the command
set -o noclobber
or the equivalent
set -C
you can cause bash to refuse to write to existing files when redirecting output.
Using >| rather than > overrides that setting.
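For example (assuming file does not already exist):
$ set -o noclobber
$ echo first > file
$ echo second > file
bash: file: cannot overwrite existing file
$ echo second >| file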
References:
The set builtin
Redirecting Output
Or run info bash (assuming it's installed on your system) and search for >| using the s (search) command with the | escaped:
s>\|
(If you're familiar with csh and/or tcsh, bash's >| (greater-than vertical-bar) is similar to csh's >! (greater-than exclamation-mark).)
From the bash manpage:
If the redirection operator is >|, or the redirection operator is > and the noclobber option to the set builtin command is not enabled, the redirection is attempted even if the file named by word exists.