Bash: export variables received line by line

I have a simple but annoying problem.
The goal is to write a shell script that runs on an EC2 instance and exports the instance's tags as variables for the rest of the script... Something like:
ec2-describe-tags [...]
| while IFS=':' read name value; do
    export "$name"="$value"
done
Not too ugly, but it doesn't work, of course, because the export runs inside the while loop, which is executed in a pipeline.
My question is: how do I write this correctly? Of course, I cannot predict the names or the number of received tags.

Try this:
while IFS=: read -r name value; do
    export "$name=$value"
done < <(ec2-describe-instance ...)
A pipeline runs the commands in a subshell, so the variables don't persist when it's done.
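To see the problem directly, here is a minimal demonstration of the pipeline subshell issue (FOO is just a stand-in tag name):
printf 'FOO:bar\n' | while IFS=: read -r name value; do
    export "$name=$value"
done
echo "${FOO-unset}"   # prints "unset": the export happened in a subshell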

Since it seems that the output consists solely of lines of the form name:value, you should just be able to do
while read -r; do
    export "$REPLY"    # using the default REPLY variable set by read
done < <( ec2-describe-tags ... | sed 's/:/=/' )
You could even get fancy with the readarray command (if available) and simply run
readarray -t env_vars < <(ec2-describe-tags ... | sed 's/:/=/')
export "${env_vars[@]}"
The process substitution allows the while loop to run in the current shell, so the exported variables will be put in the shell's environment.
If you are using bash 4.2 or later, you can set the lastpipe option, which allows the last command in a pipe to run in the current shell instead of a subshell, allowing you to keep your current pipeline.
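A minimal sketch of the lastpipe variant (this assumes a non-interactive script, since lastpipe only takes effect while job control is off):
#!/bin/bash
shopt -s lastpipe
ec2-describe-tags ... | while IFS=: read -r name value; do
    export "$name=$value"
done
# the loop ran in the current shell, so the variables persist here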

Related

Launch process from Bash script in the background, then bring it to foreground

The following is a simplified version of some code I have:
#!/bin/bash
myfile=file.txt
interactive_command > "$myfile" &
pid=$!
# Use tail to wait for the file to be populated
while read -r line; do
    first_output_line=$line
    break # we only need the first line
done < <(tail -f "$myfile")
rm "$myfile"
# do stuff with $first_output_line and $pid
# ...
# bring `interactive_command` to foreground?
I want to bring interactive_command to the foreground after its first line of output has been stored in a variable, so that a user can interact with it by calling this script.
However, fg %1 does not seem to work in the context of a script, and I cannot use fg with a PID. Is there a way I can do this?
(Also, is there a more elegant way of capturing the first line of output, without writing to a temp file?)
Job control using fg and bg is only available in interactive shells (i.e. when typing commands in a terminal). Shell scripts usually run in non-interactive shells (the same reason aliases don't work in shell scripts by default).
Since you already have the PID stored in a variable, foregrounding the process is the same as waiting on it (see Job Control Builtins). For example, you could just do
wait "$pid"
Also, what you have is a basic version of the coproc bash built-in, which lets you capture output from a background command. It exposes two file descriptors, stored in an array, through which you can read the command's stdout or feed input to its stdin:
coproc fdPair { interactive_command; }
The syntax is coproc <array-name> <command>; the built-in populates the array with the file descriptor IDs. Note that a simple command must be wrapped in { ...; } when a name is given, otherwise the name would be parsed as the command itself. If no name is given explicitly, the descriptors are stored in the COPROC array. So your requirement can be written as
coproc fdPair { interactive_command; }
IFS= read -r -u "${fdPair[0]}" firstLine
printf '%s\n' "$firstLine"
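Putting it together with wait, a rough sketch of the whole flow (interactive_command stands in for the real program; note that this streams the command's remaining output to the terminal but does not forward keyboard input to it):
coproc fdPair { interactive_command; }
IFS= read -r -u "${fdPair[0]}" firstLine   # capture the first output line
printf '%s\n' "$firstLine"
cat <&"${fdPair[0]}"    # pass the rest of its output through
wait "$fdPair_PID"      # "foreground" it by waiting for it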

How to set variables whose names and values are taken from a command's output with `declare` in bash?

I need to declare variables in a bash shell script for which both names and values are taken from another command's output.
For the sake of this question, I will use a temporary file tmp:
$ cat tmp
var1="hello world"
var2="1"
... and use it for my mock command below.
In the end, I need the variables $var1 and $var2 set to hello world and 1 respectively, with the variable names var1 and var2 taken directly from the input.
Here is what I got so far:
$ cat tmp|while read line; do declare $line; done
I know I don't need to use cat, but this is to simulate that the input comes from another command's output rather than from a file.
This doesn't work. I get:
bash: declare: `world"': not a valid identifier
and
$ echo $var1; echo $var2
$
I don't understand why this doesn't work since I can do this:
declare var1="hello world"
... with the expected result. I assumed these would be equivalent, but I'm clearly wrong.
I found this answer as the closest thing to my problem, but not quite, since it relies on sourcing a file. I would like to avoid that. I found other answers that use eval, but I'd prefer to avoid that as well.
Maybe there are subtleties in the use of quotes I don't understand.
If the only way is to use a temporary file and source it that is what I'll do, but I think there must be another way.
A good rule when writing a shell script is to always double-quote your variables. Otherwise, they are subject to shell word splitting.
while read -r line; do
    declare "$line"
done < <(echo "var1=hello world")
And why doesn't echo "var1=hello world" | while read line; do export "$line"; done work? Because each side of a pipe runs in a subshell: var1 is created in the subshell, which doesn't affect the current shell, so it can't be set there.
As an alternative, use process substitution: the command's output is presented through a file descriptor while the while loop itself runs in the current shell, so the variable is created in the current shell.
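A short end-to-end sketch of that approach, with printf standing in for the real command (lines are assumed to be plain name=value pairs, without literal quotes around the values):
while IFS= read -r line; do
    declare -x "$line"    # -x additionally exports the variable
done < <(printf '%s\n' 'var1=hello world' 'var2=1')
echo "$var1 / $var2"      # -> hello world / 1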

Setting environment variables from two lines of output

I have an external program (a kind of authentication token generator) that outputs two lines, the two parts of my authentication.
$ get-auth
SomeAuthString1
SomeAuthString2
Then I want to export these strings into environment variables, say AUTH and PIN.
I tried several things in bash, but nothing works.
For example, I can do:
get-auth | read -d$'\4' AUTH PIN
but it fails. AUTH and PIN remain unset. If I do
get-auth | paste -d\ -s | read AUTH PIN
it also fails. The only way I can get the data is by doing
get-auth | { read AUTH; read PIN; }
but obviously only in the subshell. Exporting from there has no effect.
A bit of research led me to this answer, which might mean that I can't do that (read a variable from something piped into read). But I might be wrong. I also found that if I open a subshell with { before the read, the values are available in the subshell until I close it with }.
Is there any way I can set environment variables from the two-line output? I obviously don't want to save that to a file, and I don't want to set up a FIFO just for that. Are those the only ways of getting that done?
You can use bash process substitution, <(), to achieve this with the read command.
$ cat file
123
456
With process substitution you keep the command's output available to the current shell. Using read with an empty delimiter (-d '') makes it consume the whole output at once, splitting the fields on IFS, which includes newlines by default:
$ read -r -d '' a b < <(cat file)
$ printf "%s %s\n" "$a" "$b"
123 456
Now the variables are available in the current shell, and you can export them.
The subshell was the issue. I was able to make it work by doing:
read -d $'\4' AUTH PIN < <(get-auth)
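An equivalent sketch that avoids the \4 delimiter trick altogether, using one read per output line:
{ IFS= read -r AUTH; IFS= read -r PIN; } < <(get-auth)
export AUTH PIN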

Bash: increment a variable in a script every time I run that script

I want a variable in a script to be incremented every time I run that script. Something like this:
#!/bin/bash
n=0 #the variable that I want to be incremented
next_n=$((n+1))
sed -i "2s/.*/n=$next_n/" ${0}
echo $n
will do the job, but it is fragile: if I add other lines to the script before the line where the variable is set and forget to update the line sed -i "2s/.*/n=$next_n/" ${0}, it breaks.
Also, I'd prefer not to keep the variable's value in a separate file.
Any other ideas?
#!/bin/bash
n=0;#the variable that I want to be incremented
next_n=$((n+1))
sed -i "/#the variable that I want to be incremented$/s/=.*#/=$next_n;#/" ${0}
echo $n
A script runs in a separate shell process, which means its variables are forgotten once the script ends and are not propagated to the parent shell that called it. To run a command list in the current shell, you could either source the script (see the sketch at the end of this answer) or write a function. In such a script, plain
(( n++ ))
would work - but only when called from the same shell. If the script should work from different shells, or even after switching the machine off and on again, saving the value in a file is the simplest and best option. It might be easier, though, to store the variable value in a different file, not the script itself:
[[ -f saved_value ]] || echo 0 > saved_value
n=$(< saved_value)
echo $(( n + 1 )) > saved_value
Changing the script while it runs might have strange consequences, especially when the edit changes the size of the script (which will happen when the counter goes from 9 to 10).
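For completeness, a minimal sketch of the sourcing alternative mentioned above (counter.sh is a hypothetical name):
# counter.sh
(( n++ ))    # persists, because sourcing runs the commands in the current shell
echo "n is now $n"
Run it with . ./counter.sh (or source ./counter.sh); invoking it as ./counter.sh would start a fresh shell each time, and the count would reset.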

Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

I wish to source a script, print the value of a variable this script defines, and then have this value be assigned to a variable on the command line with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93 and I am wondering why.
$ cat typeset_err.ksh
#!/bin/ksh
unset _typeset_var
typeset -i -r _typeset_var=1
DIR=init # this is the variable I want to print
When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
$ echo $A
init
When run on ksh93 (in this case, a Linux machine), the output is as follows:
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
$ print $A
($A is undefined)
The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values to many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know in advance the value of $DIR, but I need to copy files to $DIR during execution of a different batch script. Therefore the idea I had was to source the script in order to define its variables, print the one I wanted, then have that print's output be assigned to another variable via $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables.
The typeset -r near the beginning is what triggers the error. The script I'm sourcing contains it to provide a semaphore of sorts, preventing the script from being sourced more than once in the same environment. (There is an if statement in the real script that checks for _typeset_var = 1 and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r.
In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page for ksh.
There are ways to code around this error. The current favorite is to drop typeset and just set the semaphore plainly (e.g. _typeset_var=1), but the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why it is happening.
By the way, another idea I abandoned was to grep the variable I need out of its containing script and print just that one variable for $A; however, the variable ($DIR in the example above) might be set from another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script; therefore, I need to source the entire script to make sure all variables are defined, so that $DIR ends up with the correct value.
It works fine for me in ksh93 (Version JM 93t+ 2009-05-01). If I do this, though:
$ . ./typeset_err.ksh
$ A=$(. ./typeset_err.ksh; print $DIR)
-ksh: _typeset_var: is read only
So it may be that you're getting that variable typeset -r in the current environment somehow.
Try this:
A=$(ksh -c ". ./typeset_err.ksh && print \$DIR")
or
A=$(env -i ksh -c ". ./typeset_err.ksh && print \$DIR")
Sourcing the script inside a fresh ksh keeps $DIR visible to the print, and the child shell (env -i additionally clears the environment) has no leftover read-only _typeset_var from your current session.
