Create a list of string variables in a makefile

FILES = Y_4_X_0.log \
        Y_4_X_1.log \
        Y_4_X_2.log \
        Y_5_X_0.log \
        Y_5_X_1.log \
        Y_5_X_2.log
Say I want to create a variable like this. Is there a clean/readable way of doing this without using addsuffix, addprefix, etc?

There's not really enough information here to know what you want to do and what the requirements are (does "clean/readable" mean "without using any functions", or is there something about addsuffix and addprefix specifically that you don't like?)
Anyway, something like this will work for the small example you provide:
EXTS := 0 1 2
BASES := 4 5
FILES := $(foreach B,$(BASES),$(foreach E,$(EXTS),Y_$(B)_X_$(E).log))
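If you want to sanity-check the expansion, a minimal sketch (the $(info ...) line is purely diagnostic and can be deleted afterwards):
$(info FILES = $(FILES))
When the makefile is read this should print FILES = Y_4_X_0.log Y_4_X_1.log Y_4_X_2.log Y_5_X_0.log Y_5_X_1.log Y_5_X_2.log.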

Golang Cobra multiple flags with no value

I'm new to Golang, and I'm trying out my first CLI application, using the Cobra framework.
My plan is to have a few commands, with many flags.
These flags don't have to have a value attached to them, since they can simply be -r to restart the device.
Currently I have the following working, but I keep thinking that this cannot be the correct way to do it.
So any help is appreciated.
The logic currently is that each command gets a default value attached to it; I then look for this value in the run command and trigger my function once it is captured.
My "working code" looks like below.
My init function, in the command contains the following.
chargerCmd.Flags().StringP("UpdateFirmware", "u", "", "Updeates the firmware of the charger")
chargerCmd.Flags().Lookup("UpdateFirmware").NoOptDefVal = "yes"
chargerCmd.Flags().StringP("reboot", "r", "", "Reboots the charger")
chargerCmd.Flags().Lookup("reboot").NoOptDefVal = "yes"
And the run section looks like this.
Run: func(cmd *cobra.Command, args []string) {
	input, _ := cmd.Flags().GetString("UpdateFirmware")
	if input == "yes" {
		fmt.Println("Updating firmware")
		UpdateFirmware(os.Getenv("Test"), os.Getenv("Test2"))
	}
	input, _ = cmd.Flags().GetString("reboot")
	if input == "yes" {
		fmt.Println("Rebooting Charger")
	}
},
Maybe to make the usage a bit cleaner, as stated in the comment from Burak, you can better differentiate between commands and flags. With Cobra you have the root command and sub-commands attached to the root command. Additionally, each command can accept flags.
In your case, charger is the root command and you want two sub-commands: update_firmware and reboot.
So as an example to reboot the charger, you would execute the command:
$ charger reboot
In the code above, you are trying to define sub-commands as flags, which is possible, but likely not good practice.
Instead, the project should be set up something like this: https://github.com/hesamchobanlou/stackoverflow/tree/main/74934087
You can then move the UpdateFirmware(...) operation within the respective command definition under cmd/update_firmware.go instead of trying to check each flag variation on the root chargerCmd.
If that does not help, provide some more details on why you think your approach might not be correct.
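As a rough sketch of that layout (assuming chargerCmd is the root command defined elsewhere in the same package; the file name and wiring here are illustrative, not taken from the linked repository):
// cmd/reboot.go
package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

var rebootCmd = &cobra.Command{
	Use:   "reboot",
	Short: "Reboots the charger",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println("Rebooting Charger")
	},
}

func init() {
	// Attach the sub-command to the root command.
	chargerCmd.AddCommand(rebootCmd)
}
The update_firmware sub-command would be defined the same way in cmd/update_firmware.go, so each operation lives next to its own Run function instead of being dispatched from flag checks on the root chargerCmd.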

Can I reuse a makefile multiple times within a single execution?

Suppose I have:
# ./Makefile
CLUSTER=dev
include Makefile.cluster.mk
CLUSTER=local
include Makefile.cluster.mk
And in:
# ./Makefile.cluster.mk
${CLUSTER}.cmd:
	cmd ${CLUSTER}
So now I can call:
make dev.cmd
make local.cmd
Great! Except the variable is evaluated too late. Running:
$ make local.cmd # cmd local
$ make dev.cmd # Also cmd local !
Makes sense: according to https://www.gnu.org/software/make/manual/html_node/Reading-Makefiles.html,
recipe lines use deferred evaluation (vs. immediate evaluation when the makefile is read).
immediate : immediate ; deferred
	deferred
Is there a better/other way to compose a set of make commands without maintaining multiple copies of the same file?
There are lots of ways to do it, even beyond the options above; you could use static pattern rules:
CLUSTERS := dev local
$(CLUSTERS:%=%.cmd) : %.cmd :
	cmd $*
If you really want to have stuff in a separate makefile you can use target-specific variables; change your Makefile.cluster.mk to do this:
# ./Makefile.cluster.mk
${CLUSTER}.cmd: CLUSTER := $(CLUSTER)
${CLUSTER}.cmd:
	cmd ${CLUSTER}
Is there a better/other way to compose a set of make commands without maintaining multiple copies of the same file?
Often it's pattern rules. In the case of your particular example, you might do
Makefile
%.cmd:
	cmd '$*'
However, that particular version will enable any make foo.cmd, which might not be what you want.
Sometimes it's to make better use of the tools available to you. For example,
Makefile.cluster.mk
${CLUSTER}.cmd:
	arg='$@'; cmd "$${arg%.cmd}"
That extracts the wanted cluster name from the name of the target.
Occasionally it is $(eval).
(See the manual for an example.)
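A rough sketch of the $(eval) approach (the template name cluster_rule is made up here, and the cmd $(1) line must be indented with a tab):
define cluster_rule
$(1).cmd:
	cmd $(1)
endef

CLUSTERS := dev local
$(foreach C,$(CLUSTERS),$(eval $(call cluster_rule,$(C))))
Each pass through $(foreach) expands the template with one cluster name and hands the result to $(eval), so make sees the same rules as if they had been written out by hand.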
And from time to time, it's "don't do that." For example,
Makefile
CLUSTERS = dev local
CMDS = $(patsubst %,%.cmd,$(CLUSTERS))
$(CMDS):
	arg='$@'; cmd "$${arg%.cmd}"
That defines only dev.cmd and local.cmd targets, and avoids duplicating the recipe.

Conditional inclusion of patch file in recipe script

I have a recipe file and my SRC_URI section looks something like this:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
"
I want to include file5.patch under SRC_URI only if a certain environment variable is set. Is there a way to insert an if condition in the SRC_URI that looks something like this:
SRC_URI += "file://file1.patch \
file://file2.patch \
file://file4.patch \
if $ENVIRONMENT_VARIABLE:
file://file5.patch
"
Is there any other way I can achieve the same thing?
Well, the short answer is: yes, you can do this, but it's messy and there's probably a Better Way(TM). So let's answer the question first. If you really want to change the behavior of a recipe using an environment variable, the first challenge is to set the environment variable, and then let bitbake know that your new environment variable is safe and allowable. When you source the oe-init-build-env script to set up your project, or subsequently to set up a new shell to continue working on the project, it sets an env variable called BB_ENV_EXTRAWHITE. You must include your new env variable in this list, like this:
$ export MYENV_VAR=file5.patch
$ export BB_ENV_EXTRAWHITE="$BB_ENV_EXTRAWHITE MYENV_VAR"
Once this is done, then bitbake won't scrub the environment of your new environment variable.
In your recipe, use a python snippet to conditionally add your patch as follows:
SRC_URI += "${#os.getenv('MYENV_VAR', '')}"
As you can see, it's a bit messy. Of course, you could get a little more complex and test the value of the variable in your recipe, instead of putting the name of the patch file in your environment variable, but this example was the simplest way to demonstrate the concept.
Perhaps a better way is to use an override, and not rely on environment variables. If you are building a bsp with multiple variants, you could use your bsp name as the override, something like this.
SRC_URI_append_mybsp = " file://file5.patch"
This is a much cleaner way to accomplish the same thing. Of course, I'm speculating about your use case. The yocto project reference manual explains overrides. One more suggestion, join #yocto or the yocto project mailing list and you will have access to many smart people to help you.
Hope this helps. ;)
The proper way to accomplish this would be as follows,
1. local.conf
# comment the following line to remove file5.patch
ENV_VAR = "1"
NOTE: Don't forget to include the double quotes, otherwise Yocto will throw an error.
2. recipe.bbappend
SRC_URI += "${#bb.utils.contains('ENV_VAR', '1', 'file://file5.patch', '', d)}"
Instead of local.conf, you're free to use any .conf file. This approach is taken from the Yocto mailing list.
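Either way, a quick way to confirm which patches actually ended up in SRC_URI after parsing is to inspect the expanded variable (with <recipe-name> as a placeholder for your recipe):
$ bitbake -e <recipe-name> | grep '^SRC_URI='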

RRDTool - Not a valid vname

I am using RRDTool to manage and graph my Performance Data.
Currently, I am calling RRDTool from a bash script, but I get an error which I cannot fix.
My Bash Script looks like this:
# Call RRDTool and save the output
output=$(rrdtool graph --width 4000 \
DEF:data=/usr/local/pnp4nagios/var/perfdata/FM/win_tcpu.rrd:proc:AVERAGE \
VDEF:slope=data,LSLSLOPE \
PRINT:slope:'%lf')
echo "Output: " $output
I am trying to solve for m in y = m*x + b with this simple example.
My Performance Data looks something like this:
<NAGIOS>
<DATASOURCE>
<TEMPLATE>nrpe_win_tcpu</TEMPLATE>
<RRDFILE>/usr/local/pnp4nagios/var/perfdata/FM/win_tcpu.rrd</RRDFILE>
<RRD_STORAGE_TYPE>SINGLE</RRD_STORAGE_TYPE>
<RRD_HEARTBEAT>8460</RRD_HEARTBEAT>
<IS_MULTI>0</IS_MULTI>
<DS>1</DS>
<NAME>proc</NAME>
...
</DATASOURCE>
...
As you can see the file name is correct and the DataSource Name is also correct.
My Problem is that the DEF does not seem to work. I get the following Error Message:
ERROR: Not a valid vname: data in line VDEF:slope=data,LSLSLOPE
When trying to access the data saved in the variable data like this, just to check whether the problem is in my VDEF line:
LINE1:data#0000FF:"data" \
I get the following Error:
ERROR: parameter 'data' does not represent a number in line LINE1:data#0000FF:data
This suggests that the problem is somewhere in the DEF line, but I have no clue why this is happening.
Does anybody have an idea why I get these error messages and how to fix this problem?
Any help appreciated. Thanks in advance.
I found the problem. It was only a really dumb mistake I made.
I was calling:
rrdtool graph --width 4000 \
DEF:data=/usr/local/pnp4nagios/var/perfdata/FM/win_tcpu.rrd:proc:AVERAGE \
VDEF:slope=data,LSLSLOPE \
...
Looking at the usage description of RRDTool graph again showed me this:
rrdtool graph filename [-s|--start seconds] [-e|--end seconds] ...
I was simply missing a filename. When calling it as above, RRDTool interpreted my DEF line as the filename. That would eventually cause an error when writing the result to this file, but RRDTool already exits with an error on the VDEF line, since data was never defined.
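With an output filename added, the call might look like this (the path /tmp/cpu_slope.png is only an example):
output=$(rrdtool graph /tmp/cpu_slope.png --width 4000 \
	DEF:data=/usr/local/pnp4nagios/var/perfdata/FM/win_tcpu.rrd:proc:AVERAGE \
	VDEF:slope=data,LSLSLOPE \
	PRINT:slope:'%lf')
echo "Output: " $output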
Thanks for all the comments. I just wanted to clarify some things I tested:
- The variable name data does not cause any problems
- It works fine to have a DS named 1, and you can also access it in a CDEF and so on
Thanks for the help!
You can't do a line of a VDEF - it's a value, and is only valid with an aggregation function.
From: http://oss.oetiker.ch/rrdtool/doc/rrdgraph_rpn.en.html LSLSLOPE is valid.
You should be able to graph data though, e.g.
LINE1:data#00CC00:data
That you can't - and get 'does not represent a number' - makes me wonder what's in your data source. I would suggest you have a look at xport to dump the RRD and see what's in there.
You might well not be getting enough input data to build a CDP within your RRD, and so they're all UNKNOWN or NaN.
(The invalid vname does make me wonder though - have you tried changing it to something other than data, which seems like it could be a reserved word?)
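A rough example of dumping the data source with xport to see what is actually stored (the time range is arbitrary):
rrdtool xport --start -1d --end now \
	DEF:data=/usr/local/pnp4nagios/var/perfdata/FM/win_tcpu.rrd:proc:AVERAGE \
	XPORT:data:"proc"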

flag package in Go - do I have to always set default value?

Is it possible not to set a default value with the flag package in Go? For example, with the flag package you can write the following line:
filename := flag.String("file", "test.csv", "Filename to cope with")
In the above code, I don't necessarily want to set a default value (test.csv in this case); instead I want to always make users specify their own filename, and if it's not specified, I want to raise an error and exit the program.
One of the ways I came up with is to check the value of filename after calling flag.Parse(), and if that value is test.csv, have the program exit with an appropriate error message. However, I don't want to write such redundant code if it can be avoided - and even if it can't, I'd like to hear of any better way to deal with the issue.
You can do this kind of thing with Python's argparse module, by the way - I just want to implement something similar if I can...
Also, can I implement both short and long arguments (in other words, both -f and -file) with the flag package?
Thanks.
I think it's idiomatic to design your flag values in such a way that the zero value of their respective types means "not present". For example:
optFile := flag.String("file", "", "Source file")
flag.Parse()
fn := *optFile
if fn == "" {
fn = "/dev/stdin"
}
f, err := os.Open(fn)
...
As to the 2nd question: if I'm not mistaken, the flag package by design doesn't distinguish between -flag and --flag. In other words, you can have both -f and --file in your flag set and write either - or -- before both f and file. However, given another defined flag -g, the flag package will not recognize -gf foo as being equivalent to -g -f foo.
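As a small sketch combining both points (the flag names match the question; the rest is illustrative and not a feature of the flag package itself), you can bind the long and the short name to the same variable and then treat the zero value as "not provided":
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	var file string
	// Both spellings update the same variable, so -file, --file, -f and --f all work.
	flag.StringVar(&file, "file", "", "Filename to cope with (required)")
	flag.StringVar(&file, "f", "", "shorthand for -file")
	flag.Parse()

	// The zero value means the user never supplied the flag.
	if file == "" {
		fmt.Fprintln(os.Stderr, "error: -file is required")
		flag.Usage()
		os.Exit(2)
	}
	fmt.Println("using file:", file)
}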
When I have a flag that cannot have a default value I often use the value REQUIRED or something similar. I find this makes the --help easier to read.
As for why it wasn't baked in, I suspect it just wasn't considered important enough. The default wouldn't fit every need. However, the --help flag is similar; it doesn't fit every need, but it's good enough most of the time.
That's not to say that required flags are a bad idea. If you're passionate enough, a flagutil package could be nice. Wrap the current flag API, make Parse return an error that describes the missing flag, and add RequiredInt, RequiredIntVar, etc. If it turns out to be useful/popular it could be merged into the official flag package.
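A minimal sketch of the sentinel idea, omitting the usual package and import boilerplate (REQUIRED is just a convention, not a flag-package feature):
filename := flag.String("file", "REQUIRED", "Filename to cope with")
flag.Parse()
if *filename == "REQUIRED" {
	fmt.Fprintln(os.Stderr, "error: -file is required")
	flag.Usage()
	os.Exit(2)
}
The upside over an empty default is that REQUIRED shows up in the --help output, which makes it obvious that the flag has no usable default.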
This is how I implemented a command argument parser.
Since there are already plenty of similar projects, I decided not to add more choices without a strong impetus.
Here is an example of how it can be used, which might inspire somebody, or someone might be interested.
// minarg.go
package main

import (
	"fmt"
	"self/argp"
)

func main() {
	p := argp.New(nil)
	p.Bool("continue", nil, "-v", "-g")
	f := func(m, arg string) {
		switch m {
		case "__init__":
		case "__defer__":
			p.Set("server", p.GetString("-s")+":"+p.GetString("-p"))
		default:
			arg, _ := p.Shift()
			p.Set(m, arg)
		}
	}
	p.Mode(f, "__init__", "__defer__", "-s", "-p", "-nstat", "-n")
	p.Default("-s", "127.0.0.1", "-p", "1080", "-nstat", "100", "-n", "5")
	p.Env("-s", "SERVER", "-p", "PORT")
	p.Parse()
	fmt.Println(p.Vars)
}
The output is
$ go run minarg.go
&map[-g:{false continue <nil>} -n:5 -nstat:100 -p:1080 -s:127.0.0.1 -v:{false continue <nil>} server:127.0.0.1:1080]
$ export PORT=80
$ go run minarg.go -s 0.0.0.0 -n 3 -vg
&map[-g:{true continue <nil>} -n:3 -nstat:100 -p:80 -s:0.0.0.0 -v:{true continue <nil>} server:0.0.0.0:80]
