Using no flag options with Cobra Command? - go

I have a command with a default option - id. There may be other flags as well, so any of the following could be called.
cmd 81313 # 81313 is the ID
cmd -i 81313
cmd -f foo.txt 81313
cmd -i 81313 -f foo.txt
cmd 81313 -f foo.txt
What is the correct way to handle this with Cobra command?
Currently, I'm looking at the value for -i and, if it's empty, reading the values from cmdArgs (if there's something in there and it doesn't have a flag, then I'm assuming it's my ID).
However, this seems more than a little error prone - e.g. what if someone puts down two IDs?
Thanks!

I wish there was a more idiomatic way to do this
I think there is, and it's the idea I was trying to get across in the comments: don't provide multiple ways of specifying the same option on the command line.
When choosing between a positional argument vs. an option, the question is usually "is this value the thing on which the command is operating?". For example, a command like rm operates on files, so files are specified as positional arguments: we don't need to write rm -f file1 -f file2 -f file3, etc.
Similarly, the ssh command connects to a remote host, so the hostname is a positional argument. Metadata about that connection (the identity to use, the port to use, configuration options, etc.) is specified via command-line options.
Your question is a little hard to answer because you haven't told us what your command actually does. If the "id" is required for every invocation of cmd and if it identifies the thing upon which the command operates, you should probably just make it a positional argument:
cmd 81313
If there are ever situations in which you do not need to specify the ID, then it should be an option instead:
cmd -i 81313
There are situations in which you may have "mandatory options", but these are relatively rare.
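For illustration, a minimal sketch of the positional-argument approach with Cobra (the command name, the --file flag, and the handler body are assumptions, not taken from your code):
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	var file string

	// cmd <id>: the ID is a required positional argument,
	// while -f/--file remains an ordinary optional flag.
	rootCmd := &cobra.Command{
		Use:  "cmd <id>",
		Args: cobra.ExactArgs(1), // rejects zero IDs as well as more than one
		RunE: func(cmd *cobra.Command, args []string) error {
			id := args[0]
			fmt.Printf("id=%s file=%s\n", id, file)
			return nil
		},
	}
	rootCmd.Flags().StringVarP(&file, "file", "f", "", "optional input file")

	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}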

Related

Piping output to executable invokes the executable multiple times

I'm piping a command's output to be used as arguments for an executable:
command | xargs -d '\n' "executable"
When the command yields sufficiently many lines of output, I can see that the executable is run multiple times, each run receiving only a chunk of the data. This is problematic because the executable keeps state and assumes it sees all of the data in a single invocation.
Is it possible to force the "command" to feed the entire output in a single go to the executable?
Don't use xargs, use $(...) to substitute the output into the command line.
IFS=$'\n' # this is analogous to -d '\n' in xargs
set -o noglob # prevent wildcard expansion when substituting command output
executable $(command)
However, this could fail with an error if the output of command is too long. xargs splits it up into multiple invocations to prevent this. But if you really require everything to be in one invocation, the error is the way to find out that this isn't possible, and it prevents the incorrect results that multiple invocations would produce.
Is it possible to force the "command" to feed the entire output in a single go to the executable?
Yes and no.
To run the executable only once, you can use
command | bash -c 'mapfile -t a; executable "${a[@]}"'
However, this might fail if you exceed ARG_MAX of your system. A program invocation together with its arguments and environment variables must be smaller than ARG_MAX bytes. (On Linux there is even an additional restriction limiting the size of each single argument). There is no way around this.
You can check your ARG_MAX using getconf ARG_MAX or xargs --show-limits < /dev/null. This website compiled a nice list of the values on various systems.
If you are barely over the maximum and don't need environment variables, you can clear the environment to make some space.
command | env -i bash -c 'mapfile -t a; executable "${a[@]}"'
Other than that, there is no way around running executable multiple times or modifying it, preferably so that it reads lines from stdin instead of taking arguments. That way you can write
command | executable
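For illustration, if executable happened to be a shell script, the stdin-reading variant might look like this (a sketch; the processing inside the loop is hypothetical):
#!/bin/bash
# read one line at a time from stdin instead of expecting arguments
while IFS= read -r line; do
    printf 'processing: %s\n' "$line"
done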

Getting definition of -f Linux flag tougher than expected

Terminal beginner here. I was reading through a tutorial and encountered the following command:
rm -f src/*
For my own edification, I want to know what -f does.
However, when I enter man -f, I get the error response What manual page do you want?, and when I run man f, I get the response No manual entry for f.
What's the correct way to get the definition of -f in this context from the terminal?
-f is a parameter of the rm program. It doesn't have the same meaning for all programs, so you have to look at the manual page of the program in question - man rm in your case - which says:
-f, --force
ignore nonexistent files and arguments, never prompt
For instance, in tail the -f parameter means follow (output appended data as the file grows). You can learn that from tail's manual page, which is man tail.
-f in this context is a flag you add to rm. You'll see it documented under man rm. The relevant part of the output shows:
-f, --force
ignore nonexistent files and arguments, never prompt
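As a usage note, once the rm manual page is open you can jump straight to the flag with the pager's search, or filter the rendered page with grep (both are generic techniques, nothing rm-specific):
man rm                        # then type /-f and press Enter to search; n jumps to the next match
man rm | grep -A 1 -- '-f,'   # or filter the rendered page for the -f entry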

Argument list too long from shell script

One case:
I wrote a script in tcsh that invokes another script written in Python.
When I invoke the Python script from the command line, it is OK.
When I invoke the test script from tcsh, I get the error: Argument list too long
Another case:
git grep -e "alex" -- `git ls-files | grep -v 'bin'`
I also get the error: Argument list too long.
What can the problem be, and how do I solve it?
Updated Answer
I'm not familiar with the specific git commands you are using and you don't seem to be replying sensibly to the questions in the comments either. I guess you probably want something like this though:
git ls-files | grep -v 'bin' | xargs -L 128 git grep -e "alex" --
Original Answer
The classic way to solve "error: Argument list too long" is with xargs. It can be used to repeatedly call a script whose name you provide, or echo if you don't provide one, with a limited number of arguments till the arguments are all consumed.
So, imagine you have a million files in a directory: ls * will fail, but a plain ls will work. We can put that to use with:
ls | xargs -L 128
which will call echo (because we didn't provide a utility name) repeatedly with 128 filenames at a time till all are echoed.
So, you could do:
ls | xargs -L 128 yourScript.py
to call your Python script repeatedly with 128 filenames at a time. Of course you may be doing something completely different and incompatible with this technique but your answers are not very helpful so far...
For somebody who comes here needing to do something like this:
./shell_script.sh param1
but it raises the error Argument list too long because param1 is too long:
I just ran into this and fixed it with a workaround: pass the value through an environment variable instead of a positional argument.
# inside shell_script.sh, read $PARAM1 instead of $1
PARAM1="param1" ./shell_script.sh
An example of the same idea in Ruby, handing a long string to a Node.js script through the environment (Ruby caller first, then the Node script):
# Ruby: set the variable, then run the Node script in a subshell
ENV["PARAM1"] = "a_bunch_of_test_string_as_longer_as_you_can"
`node node_script.js`
// node_script.js: read the value back from the environment
var param1 = process.env.PARAM1;
console.log(param1);

How to handle multiple warning messages when running a Linux application through a bash script?

I am running an application through a bash script. When the script is executed, the application starts and two different warning messages each ask for [y/n]. For the first warning I want to answer "Y" and for the other "N", but the answers should come from the script only; I don't want any user intervention.
For a single warning we can handle it with echo 'y' | command, but how do we handle multiple warnings? Please help.
I'm not sure what you want, but almost every Linux command has a lot of options. For example, if you remove a file with rm -i file2.txt, it will ask whether you really want to do that (rm: remove regular empty file `file2.txt'?).
To skip that prompt you can use rm's -f option:
rm -f file2.txt, where -f means --force
So run man on your application to see if there is an option that avoids the prompts altogether.
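If the application has no such option and genuinely reads its answers from standard input (not every program does - some read straight from the terminal), a minimal sketch extending the echo 'y' | command idea to two prompts; the application name below is hypothetical:
#!/bin/bash
# answer the first prompt with Y and the second with N, in that order
printf 'Y\nN\n' | ./application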
Pass arguments (Y and N or anything else) to the script call like this...
./script.bash Y N
These can then be accessed in the script (script.bash) with the $1 and $2 variable names. Also you can use $3 $4 .. $N etc.
For example...Script contents
#!/bin/bash
echo $1
echo $2
will return
Y
N

scala process with spaces not working correctly

I have a Scala process command like the one below that uses the Linux egrep command. But the search results are not the same in the terminal and in my Scala-generated file: the Scala results contain everything that has "new" and "Exception", while I want the output to contain only lines having "new Exception". Am I missing something here? Please help.
if (("egrep -r -I -n -E \"*new Exception*\" /mysource/" #|
"grep -v .svn").! == 0) {
out.println(("egrep -r -I -n -E \"*new Exception*\" /mysource/" #|
"grep -v .svn").!!)
}
The docs say (under "What to Run and How"): Implicitly, each process is created either out of a String, with arguments separated by spaces -- no escaping of spaces is possible -- or out of a scala.collection.Seq, where the first element represents the command name, and the remaining elements are arguments to it. In this latter case, arguments may contain spaces
So, apparently if you need to pass the command line a single argument with spaces, like new Exception, you need to create the process builder from a Seq instead of a single String.
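A sketch of what that could look like for the command in the question (untested; the pattern is written here as plain new Exception without the surrounding asterisks, and out is whatever writer the original snippet already uses):
import scala.sys.process._

// Build each command from a Seq so "new Exception" is passed as a single argument
val search = Seq("egrep", "-r", "-I", "-n", "-E", "new Exception", "/mysource/")
val filter = Seq("grep", "-v", ".svn")
val pipeline = search #| filter

// mirror the original structure: check the exit status, then capture the output
if (pipeline.! == 0) {
  out.println(pipeline.!!)
}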
