Getting definition of -f Linux flag tougher than expected - shell

Terminal beginner here. I was reading through a tutorial and encountered the following command:
rm -f src/*
For my own edification, I want to know what -f does.
However, when I enter man -f, I get the error response What manual page do you want?, and when I run man f, I get No manual entry for f.
What's the correct way to get the definition of -f in this context from the terminal?

-f is a parameter of the rm program. It doesn't have the same meaning for all programs, so you have to look at the manual page of the program in question. That's man rm in your case, and it says:
-f, --force
ignore nonexistent files and arguments, never prompt
For instance, in tail the -f parameter means follow (output appended data as the file grows). You can learn that from tail's manual page, man tail.

-f in this context is a flag you pass to rm. You'll see it documented under man rm; the relevant part of the output is:
-f, --force
ignore nonexistent files and arguments, never prompt
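As both answers say, the place to look is man rm. If you want to see for yourself what -f actually changes, here is a quick sanity check in a scratch directory (the file names are just examples):

```shell
# rm without -f complains about a missing file and exits non-zero;
# rm -f stays silent and exits 0, and never prompts.
cd "$(mktemp -d)"
touch exists.txt

rm missing.txt        # error: cannot remove 'missing.txt'; non-zero exit status
rm -f missing.txt     # no error, exit status 0
rm -f exists.txt      # removes the file without prompting
```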

Using no flag options with Cobra Command?

I have a command with a default option - id. There may be other flags as well, so any of the following could be called.
cmd 81313 # 81313 is the ID
cmd -i 81313
cmd -f foo.txt 81313
cmd -i 81313 -f foo.txt
cmd 81313 -f foo.txt
What is the correct way to handle this with Cobra command?
Currently, I'm looking at the value for -i and, if it's empty, reading the values from cmdArgs (if there's something in there that doesn't have a flag, I assume it's my ID).
However, this seems more than a little error-prone - e.g. what if someone puts down two IDs?
Thanks!
I wish there were a more idiomatic way to do this
I think there is, and it's the idea I was trying to get across in the comments: don't provide multiple ways of specifying the same option on the command line.
When choosing between a positional argument vs. an option, the question is usually "is this value the thing on which the command is operating?". For example, a command like rm operates on files, so files are specified as positional arguments: we don't need to write rm -f file1 -f file2 -f file3, etc.
Similarly, the ssh command connects to a remote host, so the hostname is a positional argument. Metadata about that connection (the identity to use, the port to use, configuration options, etc) are also specified as command line options instead.
Your question is a little hard to answer because you haven't told us what your command actually does. If the "id" is required for every invocation of cmd and if it identifies the thing upon which the command operates, you should probably just make it a positional argument:
cmd 81313
If there are ever situations in which you do not need to specify the ID, then it should be an optional argument:
cmd -i 81313
There are situations in which you may have "mandatory options", but these are relatively rare.

Shell: Expand $HOME from regular file

I'm storing commands in a file to be read and run line by line by a POSIX shell program. It looks something like this:
curl -fLo $HOME/.antigen.zsh git.io/antigen
curl -fLo $HOME/.vim/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
vim +"so $HOME/.vimrc" +PlugInstall +qa!
I'm also using this small function body to go through it and run every line:
while read -r line; do
$line
done < file
Simple stuff. And it works! However, I am having trouble expanding $HOME to my home directory (and ~ for that matter). I've tried using an exec subshell and removing the -r from the read loop, but the curl statements create a literal '$HOME' directory, which is not what I want: I want the commands to target my /home/.\+/ directory.
Since this is a strange question and you'll probably be wondering at this point (I certainly would), this is not an XY problem. I have spent a considerable time designing this piece of software and am certain that I need to store these commands in a file for my program to work and I won't consider doing otherwise unless this is proven absolutely impossible. Also, I'm not expanding $HOME myself because I want the commands to work in other users' computers.
Any ideas? Thanks in advance!
Transferring comments into an answer.
Can you use:
sh -c "$line"
Or:
eval "$line"
Usually eval is regarded as dangerous, but I'm not sure that sh -c is much different. Come to think of it, why not simply execute the file storing the commands?
sh "$file"
You can use sh -e "$file" to stop on an unchecked error, and add -x to see what is being executed.
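A minimal sketch of why the plain loop fails and re-parsing succeeds, assuming the file contains a line that references $HOME (cmds.txt is a placeholder name, and echo stands in for curl):

```shell
cd "$(mktemp -d)"
printf '%s\n' 'echo $HOME/.antigen.zsh' > cmds.txt

# Expanding $line unquoted only word-splits the text; the shell performs
# no second round of parameter expansion, so $HOME stays literal.
while read -r line; do
  $line              # prints the literal text: $HOME/.antigen.zsh
done < cmds.txt

# Handing each line to a shell re-parses it, so $HOME expands normally:
while read -r line; do
  sh -c "$line"      # prints /home/<you>/.antigen.zsh
done < cmds.txt
```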

How to handle multiple warning messages when running a Linux application through a bash script?

I am running an application through a bash script. When the script executes, the application starts and two different warnings ask for [y/n]?. For the first warning I want to answer "Y" and for the second "N", but the answers should come from the script itself; I don't want any user intervention.
A single warning can be handled with echo 'y' | command, but how do I handle multiple warnings? Please help.
I'm not sure exactly what you want, but almost every Linux command has options for this. For example, when you remove a file with rm -i file2.txt, Linux asks whether you really want to do that: rm: remove regular empty file 'file2.txt'? y.
To skip that prompt you can use the -f option for rm:
rm -f file2.txt, where -f means --force.
So check the command's man page to see if there is an option to avoid the prompt.
Pass the arguments (Y and N, or anything else) to the script call like this:
./script.bash Y N
Inside the script (script.bash) they can be accessed as $1 and $2; further arguments are $3, $4, and so on.
For example, this script:
#!/bin/bash
echo $1
echo $2
will return
Y
N
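If the application really does read its y/n answers from standard input, in prompt order, the single-prompt echo trick generalizes; ./app below is a placeholder for the real program:

```shell
# First prompt gets y, second gets n:
printf 'y\nn\n' | ./app

# The same thing as a here-document, easier to read with many answers:
./app <<'EOF'
y
n
EOF
```

This only works for programs that read answers from stdin; if the application reads from the terminal directly, a tool like expect is needed instead.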

How do I use line output from a command as separate file names in bash?

I am trying to cleanly, without errors/quirks, open multiple files via command line in the vim text editor. I am using bash as my shell. Specifically, I want to open the first 23 files in the current working directory. My initial command was:
$ ls | head -23 | xargs vim
But when I do this, I get the following error message before all the files open:
Vim: Warning: Input is not from a terminal
and no new text is shown in the terminal after vim exits. I have to blindly do a reset in order to get a normal terminal back, apart from opening a new one.
This seems to be discussed here: using xargs vim with gnu screen, and: Why does "locate filename | xargs vim" cause strange terminal behaviour?
Since the warning occurs, xargs seems to be a no-no with vim. (Unless you do some convoluted thing using subshells and input/output redirection which I'm not too interested in as a frequent command. And using an alias/function... meh.)
The solution seemed to be to use bash's command substitution. So I tried:
$ vim $(ls | head -23)
But the files have spaces and parentheses in them, in this format:
surname, firstname(email).txt
So what the shell then does (which is also the result in the xargs case) is provide surname, and firstname(email).txt as two separate command arguments, leaving me in vim with at least twice the number of files I wanted to open, and none of the files I actually wanted to open.
So, I figure I need to escape the file names somehow. I tried to quote the command:
$ vim "$(ls | head -23)"
Then the shell concatenates the entire output from the substitution and provides that as a single command argument, so I'm left with a super-long file name which is also not what I want.
I've also tried to work with the -exec, -print, -print0 and -printf options of find, various things with arrays in bash, and probably some things I can't remember. I'm at a loss right now.
What can I do to use file names that come from a command, as separate command arguments and shell-quoted so they actually work?
Thanks for any and all help!
Here's an array-based solution:
fileset=(*)
vim "${fileset[@]:0:23}"
xargs -a <(ls | head -23) -d '\n' vim
-a tells xargs to read arguments from the named file instead of stdin, and -d '\n' treats each line as a single argument (so spaces and parentheses survive). <(...) lets us pass the output of the ls/head pipeline where a filename is expected.
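A quick check, in a scratch directory, that both approaches deliver names with spaces and parentheses as single arguments (bash and GNU xargs assumed; printf stands in for vim so the result is visible):

```shell
cd "$(mktemp -d)"
touch "doe, jane(jane@example.com).txt" "roe, rick(rick@example.com).txt"

# Array slice: every element stays one argument, however odd the name.
fileset=(*)
printf '<%s>\n' "${fileset[@]:0:2}"
# <doe, jane(jane@example.com).txt>
# <roe, rick(rick@example.com).txt>

# The xargs route delivers the same two arguments:
xargs -a <(ls | head -n 2) -d '\n' printf '<%s>\n'
```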

The -q option of bsdtar

I ran across the following code in a bash script.
# See if bsdtar can recognize the file
if bsdtar -tf "$file" -q '*' &>/dev/null; then
    cmd="bsdtar"
else
    continue
What does the '-q' option mean? I did not find any information in the help message of the bsdtar command.
Thank you!
From the bsdtar man page:
-q (--fast-read)
(x and t mode only) Extract or list only the first archive entry
that matches each pattern or filename operand. Exit as soon as
each specified pattern or filename has been matched. By default,
the archive is always read to the very end, since there can be
multiple entries with the same name and, by convention, later
entries overwrite earlier entries. This option is provided as a
performance optimization.
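A small demonstration of the behavior described above, assuming bsdtar is installed (the file names are placeholders). Adding the same file twice produces an archive with duplicate entries, which makes the difference visible:

```shell
cd "$(mktemp -d)"
touch a.txt
bsdtar -cf demo.tar a.txt a.txt   # archive now holds two entries named a.txt

bsdtar -tf demo.tar a.txt         # lists a.txt twice: reads to the end
bsdtar -tf demo.tar -q a.txt      # lists a.txt once: stops at the first match
```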
