Practical way to combine conflicting options - algorithm

I have a script which accepts multiple options as arguments. The list of valid options may be large (>20), which can result in conflicts. Is there a practical way to combine the conflicting options into one group and the non-conflicting ones into another without maintaining multiple lists, groups, etc.? Changes to one group would result in changes to the other groups.
For example, the list of available options: -a, -b, -c, -d
The following options conflict: [-a, -c], [-a, -d]
The following options don't conflict: [-a, -b], [-c, -d]
EDIT: A more precise example:
For example, the script allows starting/stopping a specific task and has additional options for creating/deleting logs.
A normal start would look like:
./script -start Task -logFile C:\out.tmp
And the script should notify the user if something like
./script -start Task -stop Task
is executed, since start and stop are two opposite actions.
Another conflicting action:
./script -start Task -logFile C:\out.tmp -deleteLog C:\out.tmp, which would create a log file and delete it at the same time.
Now, if the options are start, stop, logFile, deleteLog,
The following would be conflicting: [start, stop], [logFile, deleteLog]
The following would not be conflicting: [start, logFile], [stop, deleteLog]

Let us assume that, as you are writing the source, you can decide how your arguments are organized at parsing time. For example (using JSON notation; you can easily adapt this to C structs, Java enums or what-have-you), you can annotate the available options to explicitly indicate which of them conflict:
const options = [
  {
    name: "start",
    description: "starts foobaring the fizzbuzz",
    parameters: [
      {
        name: "task",
        type: "string",
        optional: false,
        description: "the type of task to foobar"
      }
    ],
    conflicts: ["stop"] // <-- explicit simple conflict detection
  },
  { ... }
]
This would be used by a command-line parsing module to:
generate a nice help screen, possibly listing conflicting options
return a map of command-line options, say parsedArgs, so that parsedArgs['start'] would correspond to the parameters of the start option if it was specified
detect simple conflicts, complaining if conflicting options are detected at parse time
Note that there may be additional conflicts that it may not be worthwhile to detect at the parsing stage. For instance, if the value of option foo must be larger than bar + baz, it is better to code a check for this fact after parsing, rather than complicating the parser to handle arbitrary relationships between option values.
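For concreteness, here is a minimal sketch of the parse-time conflict check, written in Groovy; the option names follow the start/stop example above, and the checkConflicts helper and the parsedArgs map literal are illustrative assumptions, not a real parsing library:

def options = [
    [name: 'start',     conflicts: ['stop']],
    [name: 'stop',      conflicts: ['start']],
    [name: 'logFile',   conflicts: ['deleteLog']],
    [name: 'deleteLog', conflicts: ['logFile']]
]

// Given the map of options the parser produced, report every conflicting pair.
def checkConflicts(Map parsedArgs, List optionSpecs) {
    def supplied = parsedArgs.keySet()
    def problems = []
    optionSpecs.findAll { it.name in supplied }.each { spec ->
        spec.conflicts.findAll { it in supplied }.each { other ->
            problems << "option '${spec.name}' conflicts with '${other}'"
        }
    }
    problems
}

// e.g. "./script -start Task -stop Task" was supplied:
checkConflicts([start: 'Task', stop: 'Task'], options).each { println it }
// option 'start' conflicts with 'stop'
// option 'stop' conflicts with 'start'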

Related

Golang Cobra multiple flags with no value

I'm new to Golang, and I'm trying out my first CLI application, using the Cobra framework.
My plan is to have a few commands, with many flags.
These flags don't have to have a value attached to them, since they can simply be -r to restart the device.
Currently, I have the following working, but I keep thinking that this cannot be the correct way to do it.
So any help is appreciated.
The current logic is that each flag gets a default value attached to it; I then look for this value in the Run command and trigger my function once it is captured.
My "working code" is shown below.
My init function in the command contains the following:
chargerCmd.Flags().StringP("UpdateFirmware", "u", "", "Updeates the firmware of the charger")
chargerCmd.Flags().Lookup("UpdateFirmware").NoOptDefVal = "yes"
chargerCmd.Flags().StringP("reboot", "r", "", "Reboots the charger")
chargerCmd.Flags().Lookup("reboot").NoOptDefVal = "yes"
And the run section looks like this.
Run: func(cmd *cobra.Command, args []string) {
    input, _ := cmd.Flags().GetString("UpdateFirmware")
    if input == "yes" {
        fmt.Println("Updating firmware")
        UpdateFirmware(os.Getenv("Test"), os.Getenv("Test2"))
    }
    input, _ = cmd.Flags().GetString("reboot")
    if input == "yes" {
        fmt.Println("Rebooting Charger")
    }
},
Maybe to make the usage a bit cleaner, as stated in the comment from Burak, you can better differentiate between commands and flags. With Cobra you have the root command and sub-commands attached to the root command. Additionally, each command can accept flags.
In your case, charger is the root command and you want two sub-commands: update_firmware and reboot.
So as an example to reboot the charger, you would execute the command:
$ charger reboot
In the code above, you are trying to define sub-commands as flags, which is possible, but likely not good practice.
Instead, the project should be set up something like this: https://github.com/hesamchobanlou/stackoverflow/tree/main/74934087
You can then move the UpdateFirmware(...) operation into the respective command definition under cmd/update_firmware.go instead of trying to check each flag variation on the root chargerCmd.
If that does not help, provide some more details on why you think your approach might not be correct.

Why are optional inputs on my Gradle custom task not working?

I have a build.gradle with the following contents:
task myTask {
    inputs.file("input.txt").optional()
    doLast { println "input.txt exists = " + file("input.txt").exists() }
}
If input.txt doesn't exist, it fails with:
File '/Users/skissane/testgradle/input.txt' specified for property '$1' does not exist.
What I am trying to do, is run a custom script–which is written in Groovy, and runs inside the Gradle build under doLast, not as an external process–which takes the input.txt file as input, and the script's behaviour and output will change based on what is in that input file. But it is an optional input file – the script will still generate output (albeit different output) even if the input file doesn't exist.
Things I have tried so far:
Remove .optional(), change it to .optional(true): no difference in results
Instead of .optional(), wrap it in if (file("input.txt").exists()) {: this works, but seems ugly. Why doesn't .optional() work?
Have I misunderstood what .optional() is meant to do? Because another answer suggests it is the right way to solve my problem, but it isn't working.
(I am using Gradle 6.8.3. I tried upgrading to the latest Gradle 7.2, the same problem occurs, although 7.2 has more detailed error messages.)
optional() can't be used to mark the file itself as optional. optional() just means that the input property is optional, and the task is still valid if no files at all are specified; but if a file is specified, it must exist.
As such, optional() isn't really useful in this kind of custom task declared directly in build.gradle. It is really intended for defining new task types in plugins, when one defines a new task input property other than inputs, and wants to make it optional to declare files for that property. It is the property itself which is made optional, not the files in it. On a custom task, declaring inputs as optional is pointless because it is already optional to begin with.
Right now (as of version 7.2), Gradle doesn't have any way to mark a file as an optional input, other than through if (file("input.txt").exists()) {. Hopefully they might add that feature in some future Gradle version.
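For reference, here is a minimal sketch of that workaround directly in build.gradle, registering the input only when the file actually exists:

task myTask {
    def optionalInput = file("input.txt")
    if (optionalInput.exists()) {
        inputs.file(optionalInput)   // declare the input only when it is actually there
    }
    doLast {
        println "input.txt exists = " + optionalInput.exists()
    }
}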
(Thanks to James Justinic who answered my post about this on Gradle forums.)

Making jenkins parallel blocks more comprehensible by humans

Jenkins parallel blocks are great but they do raise the bar for human comprehension as they interleave output.
def mysteps = [:]
mysteps['something'] = { sh "do-something.sh" }
if (wantOtherThing) {
    mysteps['otherthing'] = { sh "do-otherthing.sh" }
}
parallel mysteps
This executes, creating console output like so:
[something] ...
[something] ...
[otherthing] ...
[something] ...
...
The case above offers a simple option: redirect output to a log file and cat it into the console log later. If I use a series of Jenkins plugins & tasks (e.g. the ansible-playbook task), then un-interleaving the output is more of a challenge. In that case the only option seems to be to create specific log files and store them as build outputs.
Is there another approach to keeping the console spartan and comprehensible while still maintaining:
a somewhat dynamic console so folks can watch the build
enough debug information so we can tell why a job failed?
If you look at the output in Blue Ocean, it separates the output for each parallel task.
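For the classic console view, the redirect-and-cat idea from the question can be sketched like this; the script and log file names are the question's own placeholders, and archiving the raw log is just one way to keep the debug detail around:

def mysteps = [:]
mysteps['something'] = {
    try {
        sh "do-something.sh > something.log 2>&1"
    } finally {
        sh "cat something.log"                      // dump this branch's output as one contiguous block
        archiveArtifacts artifacts: 'something.log' // keep the raw log for failure analysis
    }
}
// ... same pattern for 'otherthing' ...
parallel mysteps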

What is the equivalent of an Ant taskdef in Gradle?

I've been struggling with this for a day and a half or so. I'm trying to replicate the following Ant concept in Gradle:
<target name="test">
    ...
    <runexe name="<filename>" params="<params>" />
    ...
</target>
where runexe is declared elsewhere as
<macrodef name="runexe">
    ...
</macrodef>
and might also be a taskdef or a scriptdef i.e. I'd like to be able to call a reusable, pre-defined block of code and pass it the necessary parameters from within Gradle tasks. I've tried many things. I can create a task that runs the exe without any trouble:
task runexe(type: Exec) {
    commandLine 'cmd', '/c', 'dir', '/B'
}
task test(dependsOn: 'runexe') {
    runexe {
        commandLine 'cmd', '/c', 'dir', '/N', 'e:\\utilities\\'
    }
}
test << {
    println "Testing..."
    // I want to call runexe here.
    ...
}
and use dependsOn to have it run. However, this doesn't allow me to run runexe precisely when I need to. I've experimented extensively with executable, args and commandLine. I've played around with exec and tried several different variations found here and around the 'net. I've also been working with the free books available from the Gradle site.
What I need to do is read a list of files from a directory and pass each file to the application with some other arguments. The list of files won't be known until execution time i.e. until the script reads them, the list can vary and the call needs to be made repeatedly.
My best option currently appears to be what I found here, which may be fine, but it just seems that there should be a better way. I understand that tasks are meant to be called once, and that you can't call a task from within another task or pass parameters to one, but I'd dearly like to know what the correct approach to this is in Gradle. I'm hoping that one of the Gradle designers might be kind enough to enlighten me, as this is a question asked frequently all over the web and I'm yet to find a clear answer or a solution that I can make work.
If your task needs to read file names, then I suggest using the provided API instead of executing commands. Also, using exec makes the build OS-specific and therefore not necessarily portable across operating systems.
Here's how to do it:
task hello {
    doLast {
        def tree = fileTree(dir: '/tmp/test/txt')
        def array = []
        tree.each {
            array << it
            print "${it.getName()} added to array!\n"
        }
    }
}
I ultimately went with this, mentioned above. I have exec {} working well in several places and it seems to be the best option for this use case.
To please an overzealous moderator, that means this:
def doMyThing(String target) {
    exec {
        executable "something.sh"
        args "-t", target
    }
}
as mentioned above. This provides the same ultimate functionality.
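To tie that back to the original requirement (a file list only known at execution time, with one call per file), here is a hypothetical usage sketch; the inputs directory is a placeholder, and each file path is passed as the helper's target argument:

task processAll {
    doLast {
        fileTree(dir: 'inputs').each { f ->
            doMyThing(f.absolutePath)   // one exec {} invocation per file
        }
    }
}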

Get autocompletion list in bash variable

I'm working with a big software project with many build targets. When typing make <tab> <tab> it shows over 1000 possible make targets.
What I want is a bash script that filters those targets by certain rules. Therefore I would like to have this list of make targets in a bash variable.
make_targets=$(???)
[do something with make_targets]
make $make_targets
It would be best if I didn't have to change anything in my project.
How can I get such a list?
@yuyichao created a function to get autocompletion output:
comp() {
    COMP_LINE="$*"
    COMP_WORDS=("$@")
    COMP_CWORD=${#COMP_WORDS[@]}
    ((COMP_CWORD--))
    COMP_POINT=${#COMP_LINE}
    COMP_WORDBREAKS='"'"'><=;|&(:"
    # Don't really think any real autocompletion script will rely on
    # the following 2 vars, but on principle they could ~~~ LOL.
    COMP_TYPE=9
    COMP_KEY=9
    _command_offset 0
    echo "${COMPREPLY[@]}"
}
Just run comp make '' to get the results, and you can manipulate that. Example:
$ comp make ''
test foo clean
You would need to overwrite / modify the completion function for make. On Ubuntu it is located at:
/usr/share/bash-completion/completions/make
(Other distributions may store the file at /etc/bash_completion.d/make)
If you don't want to change the completion behavior for the whole system, you might write a small wrapper script like build-project, which calls make. Then write a completion function for that wrapper, derived from make's existing one.
