Espressif IDF override KConfig variable - makefile

I need to flash a bunch of ESP boards that have a few different compile-time variables (per board) that are normally defined in a Kconfig file for a module.
Is there any way to pass some env variable or perhaps a command-line argument to make or idf.py that will update sdkconfig? I can do that with sed no problem, but perhaps there's a better solution?
So I can use something like
for HOST in "a" "b" "c";
do
echo "Plug board for host $HOST and press Enter"
read -r
CONFIG_VAR_SMTH_HOST="${HOST}" make flash
echo "Done"
done;
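In the meantime, the sed variant I have in mind looks roughly like this; it's just a sketch and assumes a build has already generated sdkconfig in the project directory and that the symbol is stored there as a quoted string:
for HOST in a b c; do
    echo "Plug in the board for host $HOST and press Enter"
    read -r
    # rewrite the Kconfig symbol in the generated sdkconfig, then build and flash
    sed -i "s/^CONFIG_VAR_SMTH_HOST=.*/CONFIG_VAR_SMTH_HOST=\"$HOST\"/" sdkconfig
    make flash
    echo "Done with $HOST"
done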


Access value of read input from within a function in Bash

I have a function wrapper for the read command. I'm trying to automate most of the input prompting in my script with a function. Below is the non-working code.
ask_input() {
_question="$1"
_thevar="$2"
_finalvar=$(eval $(echo $_thevar))
read -ep "${_question}: " "$_thevar"
printf "%s\n" "$_question" "$_thevar" "$_finalvar"
}
Basically, what I'm hoping is that when I execute the following:
ask_input "Do you like apple (yes/no)" the_answer
If a user types yes, each variable will contain the following (without the quotes, of course; they're just there for readability):
$_question --> "Do you like apple (yes/no)"
$_thevar --> "the_answer"
$_finalvar --> "yes"
The eval command is my attempt to solve the problem, but I have not found an actual solution yet.
The two main things you need to change are: 1) use indirect expansion with ! (_finalvar="${!_thevar}") instead of messing with eval, and 2) do that after reading something into the variable. I'd also recommend making all those variables local to the function. So something like this:
ask_input() {
local _question="$1"
local _thevar="$2"
read -ep "${_question}: " "$_thevar"
local _finalvar="${!_thevar}"
printf "%s\n" "$_question" "$_thevar" "$_finalvar"
}
Since those variables are local now, you could probably also remove the _ prefixes (unless you're worried about a conflict with the variable name supplied as $2).
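For example, an illustrative session with the fixed function sourced:
$ ask_input "Do you like apple (yes/no)" the_answer
Do you like apple (yes/no): yes
Do you like apple (yes/no)
the_answer
yes
$ echo "$the_answer"
yes
Note that the_answer itself is still set in the calling shell, since read stores into the named variable and only _question, _thevar and _finalvar are local.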

Perl6 REPL usage

Is it possible to have (Rakudo) Perl6 execute some code before dropping you into the REPL, like Python does with "python -i"?
For instance, I want to load up some modules and maybe read a side file and build some data structures from that side file before dropping into the REPL and letting the user do the things they need to do on the data structure, using the REPL as a user interface.
This is similar to, but different from, Start REPL with definitions loaded from file, though answers to this question might satisfy that one. The basic case is that, at the end of execution of any program, instead of exiting, the interpreter leaves the user at the REPL. Aside from providing a nifty, built-in, Perl6-based user interface for interactive programs, it also provides a good tool for debugging code that otherwise exits with an error.
edit:
Selecting Zoffix's solution as the correct (so far) one as it is the only one that satisfies all requirements as stated. Here's hoping this capability gets added to the compiler or language spec.
You can load modules with the -M switch.
$ perl6 -MJSON::Tiny
To exit type 'exit' or '^D'
> to-json Array.new: 1,2,3.Str
[ 1, 2, "3" ]
>
If you want to run other code, currently you have to put it into a module first.
$ mkdir lib
$ echo 'our $bar = 42' > lib/foo.pm6
$ perl6 -Ilib -Mfoo
To exit type 'exit' or '^D'
> $bar
42
>
I'd like to provide an answer that Zoffix gave on IRC. It satisfies the basic requirement, but it is far from pretty, and it uses NQP, for which there is no user support; the NQP API ("nqp::*" calls) is not guaranteed to remain stable and can change without warning.
replify 「
say 'Hello to your custom REPL! Type `say $a` to print the secret variable';
my $a = "The value is {rand}";
」;
sub replify (Str:D \pre-code = '') {
use nqp;
my %adverbs; # command line args like --MFoo
my \r := REPL.new: nqp::getcomp('perl6'), %adverbs;
my \enc := %adverbs<encoding>:v.Str;
enc && enc ne 'fixed_8' && $*IN.set-encoding: enc;
my $*CTXSAVE := r;
my $*MAIN_CTX;
pre-code and r.repl-eval: pre-code, $, :outer_ctx(nqp::getattr(r, REPL, '$!save_ctx')),
|%adverbs;
$*MAIN_CTX and nqp::bindattr(r, REPL, '$!save_ctx', $*MAIN_CTX);
r.repl-loop: :interactive, |%adverbs;
}

Makefile: Save a variable during execution time

I'm using Makefiles with "make" for a lot of things, like starting / stopping / configuring services I've written. Sometimes I'd like to read input from the user. The only ways I know are either to make the user pass his input with NAME=VALUE when executing make, or to put a command like read -p "setting X: " var ; echo $$var into the Makefile.
NAME=VALUE has the disadvantage that the user must set it manually and I can't "ask" him to enter a value. read has the disadvantage that the read value cannot (or I don't know how it can) be saved in a variable, so it can't be used multiple times.
Is there a way to read user input into a variable while executing a specific Makefile target? (I don't want to put FILE ?= 'read -p "value: " var ; echo $$var' in the header, because the value is only needed for one target, and when I put that line in the target itself, I get the error "/bin/bash: FILE: Command not found.")
I use intermediate files for this purpose.
INPUT = dialog --inputbox 80 10 10

all: case1 case2

case1: read-input
	echo $(shell cat read-input) in case 1

case2: read-input
	echo $(shell cat read-input) in case 2

.INTERMEDIATE: read-input
read-input:
	$(INPUT) 2>$@
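If the value is only needed inside one target, another option (just a sketch; service.conf is a made-up file name, and recipe lines must be indented with a tab) is to keep the read and every use of the value inside a single recipe, so it all runs in one shell:
configure:
	@read -p "setting X: " var; \
	echo "you entered $$var"; \
	printf 'X=%s\n' "$$var" > service.conf
The escaped newlines are what keep $$var alive; without them, each recipe line would run in its own shell and the value would be lost.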

How to sum IP addresses

I'm creating a *.deb package that transforms your wireless card into a hotspot.
I'm stuck on the configuration:
I have to write a postinst file in which I ask the user what IP address he would like for his hotspot, and then use it to generate the range and subnet addresses for the isc-dhcp-server.
Something like this:
10.10.0.01 + 0.0.0.9 = 10.10.0.10
I know how to assign strings and numbers to variables and how to ask the user for his chosen IP, but how do I modify a variable and assign the result to another one? expr thinks it's a floating-point number and won't work.
Hoping that everything is clear enough; thank you in advance.
Avoid leading zeros; in an arithmetic context bash treats a number like 09 as malformed octal.
IFS="." read -a a <<< 10.10.0.1
IFS="." read -a b <<< 0.0.0.9
s="$[a[0]+b[0]].$[a[1]+b[1]].$[a[2]+b[2]].$[a[3]+b[3]]"
echo $s
Output:
10.10.0.10
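If you ever need the carry to ripple across octets (say 10.10.0.250 + 0.0.0.9), a sketch that works on the full 32-bit value looks like this; plain bash arithmetic, still assuming no leading zeros:
ip2int() { local a b c d; IFS=. read -r a b c d <<< "$1"; echo $(( (a<<24) + (b<<16) + (c<<8) + d )); }
int2ip() { local n=$1; echo "$(( (n>>24)&255 )).$(( (n>>16)&255 )).$(( (n>>8)&255 )).$(( n&255 ))"; }
int2ip $(( $(ip2int 10.10.0.250) + $(ip2int 0.0.0.9) ))
# prints 10.10.1.3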
Ok, I found a workaround method:
when I ask the user for his chosen IP, I use these:
IFS="." read -r a b c d
choosenip="$a.$b.$c.$d"
subnetip="$a.$b.$c.0"
rangeipmin="$a.$b.$c.20"
rangeipmax="$a.$b.$c.30"
IFS changes the default field separators (space or tab) to whatever you want.
So when I have to put these in dhcpd.conf with "echo", I just call the variables.
If you have more elegant ways to do that, you're welcome to share.
Thank you

How to find or make a Bash utility script library? [closed]

Is there any commonly used (or unjustly uncommonly used) utility "library" of bash functions? Something like Apache commons-lang for Java. Bash is so ubiquitous that it seems oddly neglected in the area of extension libraries.
If not, how would I make one?
Libraries for bash are out there, but not common. One of the reasons bash libraries are scarce is the limitations of bash functions. I believe these limitations are best explained on Greg's Bash Wiki:
Functions. Bash's "functions" have several issues:
Code reusability: Bash functions don't return anything; they only produce output streams. Every reasonable method of capturing that stream and either assigning it to a variable or passing it as an argument requires a SubShell, which breaks all assignments to outer scopes. (See also BashFAQ/084 for tricks to retrieve results from a function.) Thus, libraries of reusable functions are not feasible, as you can't ask a function to store its results in a variable whose name is passed as an argument (except by performing eval backflips).
Scope: Bash has a simple system of local scope which roughly resembles "dynamic scope" (e.g. Javascript, elisp). Functions see the locals of their callers (like Python's "nonlocal" keyword), but can't access a caller's positional parameters (except through BASH_ARGV if extdebug is enabled). Reusable functions can't be guaranteed free of namespace collisions unless you resort to weird naming rules to make conflicts sufficiently unlikely. This is particularly a problem if implementing functions that expect to be acting upon variable names from frame n-3 which may have been overwritten by your reusable function at n-2. Ksh93 can use the more common lexical scope rules by declaring functions with the "function name { ... }" syntax (Bash can't, but supports this syntax anyway).
Closures: In Bash, functions themselves are always global (have "file scope"), so no closures. Function definitions may be nested, but these are not closures, though they look very much the same. Functions are not "passable" (first-class), and there are no anonymous functions (lambdas). In fact, nothing is "passable", especially not arrays. Bash uses strictly call-by-value semantics (magic alias hack excepted).
There are many more complications involving: subshells; exported functions; "function collapsing" (functions that define or redefine other functions or themselves); traps (and their inheritance); and the way functions interact with stdio. Don't bite the newbie for not understanding all this. Shell functions are totally f***ed.
Source: http://mywiki.wooledge.org/BashWeaknesses
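For what it's worth, the "eval backflips" complaint above has eased a little in newer bash: since 4.3 a function can write its result into a variable named by the caller through a nameref. A minimal sketch (assuming bash >= 4.3):
get_result() {
    local -n _out=$1        # nameref: _out aliases the caller's variable
    _out="computed value"   # no subshell, no eval
}
get_result answer
echo "$answer"              # prints: computed value
It doesn't remove the other limitations in that list, but it does make "store the result in a variable whose name is passed as an argument" doable without eval.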
One example of a shell "library" is /etc/rc.d/functions on Red Hat-based systems. This file contains functions commonly used in SysV init scripts.
I see some good info and bad info here. Let me share what I know, since bash is the primary language I use at work (and we build libraries...).
Google has a decent write up on bash scripts in general that I thought was a good read: https://google.github.io/styleguide/shell.xml.
Let me start by saying you should not think of a bash library as you do libraries in other languages.
There are certain practices that must be enforced to keep a library in bash simple, organized, and most importantly, reusable.
There is no concept of returning anything from a bash function except for strings that it prints and the function's exit status (0-255).
There are expected limitations here and a learning curve especially if you're accustomed to functions of higher-level languages.
It can be weird at first, and if you find yourself in a situation where strings just aren't cutting it, you'll want to leverage an external tool such as jq.
If jq (or something like it) is available, you can start having your functions print formatted output to be parsed & utilized as you would an object, array, etc.
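As a rough sketch of that idea (assuming jq is installed; the function and field names are made up for illustration):
get_service_info() {
    # print structured output instead of trying to "return" values
    printf '{"host":"%s","port":%s}\n' localhost 80
}
info=$(get_service_info)
host=$(jq -r '.host' <<< "$info")
port=$(jq -r '.port' <<< "$info")
echo "$host:$port"    # localhost:80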
Function Declarations
There are two ways to declare a function in bash.
One operates within your current shell; we'll call it Fx0.
The other spawns a subshell to operate in; we'll call that Fx1.
Here are examples of how they're declared:
Fx0(){ echo "Hello from $FUNCNAME"; }
Fx1()( echo "Hello from $FUNCNAME" )
Indeed, these two functions perform the same operation.
However, there is a key difference here.
Fx1 cannot perform any action that alters the current shell.
That means modifying variables, changing shell options and declaring other functions.
The latter is what can be exploited to prevent namespacing issues that can easily creep up on you.
# Fx1 cannot change the variable from a subshell
Fx0(){ Fx=0; }
Fx1()( Fx=1 )
Fx=foo; Fx0; echo $Fx
# 0
Fx=foo; Fx1; echo $Fx
# foo
That being said, the only time you should use an "Fx0" kind of function is when you want to redeclare something in the current shell.
Otherwise, always use "Fx1" functions, because they are safer and you don't have to worry about the naming of any functions declared within them.
As you can see below, the innocent function is overwritten inside of Fx1; however, it remains unscathed after Fx1 has run.
innocent_function()(
echo ":)"
)
Fx1()(
innocent_function()( true )
innocent_function
)
Fx1 #prints nothing, just returns true
innocent_function
# :)
This would have (likely) unintended consequences if you had used curly braces.
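To make that concrete, here is the same layout with curly braces; because the function now runs in the current shell, its inner definition clobbers the outer innocent_function:
innocent_function(){
    echo ":)"
}
Fx0(){
    innocent_function(){ true; }
    innocent_function
}
Fx0                 # prints nothing
innocent_function   # still prints nothing: the original definition was overwritten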
Examples of useful "Fx0" type functions would be specifically for changing the current shell, like so:
use_strict(){
set -eEu -o pipefail
}
enable_debug(){
set -Tx
}
disable_debug(){
set +Tx
}
Regarding Declarations
The use of global variables, or at least those expected to have a value, is bad practice all the way around.
As you're building a library in bash, you don't ever want a function to rely on an external variable already being set.
Anything the function needs should be supplied to it via the positional parameters.
This is the main problem I see in libraries other folks try to build in bash.
Even if I find something cool, I can't use it because I don't know the names of the variables I need to have set ahead of time.
It leads to digging through all of the code and ultimately just picking out the useful pieces for myself.
By far, the best functions to create for a library are extremely small and don't utilize named variables at all, even locally.
Take the following for example:
serviceClient()(
showUsage()(
echo "This should be a help page"
) >&2
isValidArg()(
test "$(type -t "$1")" = "function"
)
isRunning()(
nc -zw1 "$(getHostname)" "$(getPortNumber)"
) &>/dev/null
getHostname()(
echo localhost
)
getPortNumber()(
echo 80
)
getStatus()(
if isRunning
then echo OK
else echo DOWN
fi
)
getErrorCount()(
grep -c "ERROR" /var/log/apache2/error.log
)
printDetails()(
echo "Service status: $(getStatus)"
echo "Errors logged: $(getErrorCount)"
)
if isValidArg "$1"
then "$1"
else showUsage
fi
)
Typically, what you would see near the top is local hostname=localhost and local port_number=80 which is fine, but it is not necessary.
It is my opinion that these things should be functional-ized as you're building to prevent future pain when all of a sudden some logic needs to be introduced for getting a value, like: if isHttps; then echo 443; else echo 80; fi.
You don't want that kind of logic placed in your main function or else you'll quickly make it ugly and unmanageable.
Now, serviceClient has internal functions that get declared upon invocation, which adds a negligible amount of overhead to each run.
The benefit is that you can now have a service2Client with functions (or external functions) named the same as those in serviceClient, with absolutely no conflicts.
Another important thing to keep in mind is that redirections can be applied to an entire function when declaring it; see isRunning or showUsage above.
This is as close to object orientation as I think you should bother getting with bash.
. serviceClient.sh
serviceClient
# This should be a help page
if serviceClient isRunning
then serviceClient printDetails
fi
# Service status: OK
# Errors logged: 0
I hope this helps my fellow bash hackers out there.
Here's a list of "worthy of your time" bash libraries that I found after spending an hour or so googling.
https://github.com/mietek/bashmenot/
bashmenot is a library that is used by Halcyon and Haskell on Heroku. The above link points to a complete list of available functions with examples; impressive quality, quantity, and documentation.
http://marcomaggi.github.io/docs/mbfl.html
MBFL offers a set of modules implementing common operations and a script template. A pretty mature project, still active on GitHub.
https://github.com/javier-lopez/learn/blob/master/sh/lib
You need to look at the code for a brief description and examples. It has a few years of development behind it.
https://github.com/martinburger/bash-common-helpers
This one has the fewest and most basic functions. For documentation, you also have to look at the code.
Variables declared inside a function but without the local keyword are global.
It's good practice to declare variables only needed inside a function with local to avoid conflicts with other functions and globally (see foo() below).
Bash function libraries always need to be 'sourced'. I prefer using the 'source' synonym instead of the more common dot (.), so I can spot it more easily when debugging.
The following technique works in at least bash 3.00.16 and 4.1.5...
#!/bin/bash
#
# TECHNIQUES
#
source ./TECHNIQUES.source
echo
echo "Send user prompts inside a function to stderr..."
foo() {
echo " Function foo()..." >&2 # send user prompts to stderr
echo " Echoing 'this is my data'..." >&2 # send user prompts to stderr
echo "this is my data" # this will not be displayed yet
}
#
fnRESULT=$(foo) # prints: Function foo()...
echo " foo() returned '$fnRESULT'" # prints: foo() returned 'this is my data'
echo
echo "Passing global and local variables..."
#
GLOBALVAR="Reusing result of foo() which is '$fnRESULT'"
echo " Outside function: GLOBALVAR=$GLOBALVAR"
#
function fn()
{
local LOCALVAR="declared inside fn() with 'local' keyword is only visible in fn()"
GLOBALinFN="declared inside fn() without 'local' keyword is visible globally"
echo
echo " Inside function fn()..."
echo " GLOBALVAR=$GLOBALVAR"
echo " LOCALVAR=$LOCALVAR"
echo " GLOBALinFN=$GLOBALinFN"
}
# call fn()...
fn
# call fnX()...
fnX
echo
echo " Outside function..."
echo " GLOBALVAR=$GLOBALVAR"
echo
echo " LOCALVAR=$LOCALVAR"
echo " GLOBALinFN=$GLOBALinFN"
echo
echo " LOCALVARx=$LOCALVARx"
echo " GLOBALinFNx=$GLOBALinFNx"
echo
The sourced function library is represented by...
#!/bin/bash
#
# TECHNIQUES.source
#
function fnX()
{
local LOCALVARx="declared inside fnX() with 'local' keyword is only visible in fnX()"
GLOBALinFNx="declared inside fnX() without 'local' keyword is visible globally"
echo
echo " Inside function fnX()..."
echo " GLOBALVAR=$GLOBALVAR"
echo " LOCALVARx=$LOCALVARx"
echo " GLOBALinFNx=$GLOBALinFNx"
}
Running TECHNIQUES produces the following output...
Send user prompts inside a function to stderr...
Function foo()...
Echoing 'this is my data'...
foo() returned 'this is my data'
Passing global and local variables...
Outside function: GLOBALVAR=Reusing result of foo() which is 'this is my data'
Inside function fn()...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVAR=declared inside fn() with 'local' keyword is only visible in fn()
GLOBALinFN=declared inside fn() without 'local' keyword is visible globally
Inside function fnX()...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVARx=declared inside fnX() with 'local' keyword is only visible in fnX()
GLOBALinFNx=declared inside fnX() without 'local' keyword is visible globally
Outside function...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVAR=
GLOBALinFN=declared inside fn() without 'local' keyword is visible globally
LOCALVARx=
GLOBALinFNx=declared inside fnX() without 'local' keyword is visible globally
I found a good but old article here that gave a comprehensive list of utility libraries:
http://dberkholz.com/2011/04/07/bash-shell-scripting-libraries/
I can tell you that the lack of available function libraries has nothing to do with Bash's limitations, but rather with how Bash is used. Bash is a quick and dirty language made for automation, not development, so the need for a library is rare. There is also a fine line between a function that needs to be shared and converting that function into a full-fledged script to be called. That is the coding perspective; whether something gets loaded by a shell is another matter, but that normally comes down to personal taste, not need. So, again, a lack of shared libraries.
Here are a few functions I use regularly
In my .bashrc
cd () {
local pwd="${PWD}/"; # we need a slash at the end so we can check for it, too
if [[ "$1" == "-e" ]]; then
shift
# start from the end
[[ "$2" ]] && builtin cd "${pwd%/$1/*}/${2:-$1}/${pwd##*/$1/}" || builtin cd "$#"
else
# start from the beginning
if [[ "$2" ]]; then
builtin cd "${pwd/$1/$2}"
pwd
else
builtin cd "$@"
fi
fi
}
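For illustration, a typical use of that wrapper substitutes one path component for another (the paths are just an example):
$ cd /usr/share/doc/bash
$ cd share local
/usr/local/doc/bash
The second call hits the "start from the beginning" branch: share is replaced with local in $PWD and the new working directory is printed.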
And a version of log()/err() exists in a function library at work for coders, mainly so we all use the same style.
log() {
echo -e "$(date +%m.%d_%H:%M) $@" | tee -a $OUTPUT_LOG
}
err() {
echo -e "$(date +%m.%d_%H:%M) $@" | tee -a $OUTPUT_LOG
}
As you can see, the utilities we use here are not that exciting to share. I have another library with tricks for working around bash limitations, which I think is the best use for a library, and I recommend creating your own.

Resources