I need to write a shell script that reads another .sh script, finds a particular variable assignment (for example, Variable_Name="variable1"), and extracts its value (variable1).
Then, in a second shell script, wherever Variable_Name is used I need to replace it with that value (variable1).
A simple approach, to build on, might be:
assignment=$(echo 'Variable_Name="variable1"' | sed -r 's/Variable_Name=(.*)/\1/')
echo $assignment
"variable1"
Depending on the variable, the value might be unquoted, single-quoted, or double-quoted. The quoting might be necessary (a string with blanks) or superfluous. After the assignment there might be further code:
pi=3.14;v=42;
or a comment:
user=janis # Janis Joplin
it might be complicated:
expr="foobar; O'Reilly " # trailing blank important
But only you know how complicated it might get. Maybe the simple case is already sufficient. If the new script looks similar, it might work, or it might not:
targetV=INSERT_HERE; secondV=23
# oops: secondV accidentally hidden:
targetV="foobar; O'Reilly " # trailing blank important; secondV=23
If the second script is under your control, you can prevent such problems easily. If source and target language are identical, what worked here should work there too.
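For the simple case, a minimal end-to-end sketch might look like the following (source.sh and target.sh are hypothetical names, the extraction assumes one plain assignment per line, the substitution assumes the value contains no characters special to sed such as / or &, and -i is GNU sed's in-place flag):
# pull the value of Variable_Name out of source.sh, dropping optional double quotes
value=$(sed -n -E 's/^Variable_Name="?([^"]*)"?[[:space:]]*$/\1/p' source.sh)
# replace every occurrence of Variable_Name in target.sh with that value
sed -i "s/Variable_Name/$value/g" target.sh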
Related
I've written a bunch of environment variables in Docker format, but now I want to use them outside of that context. How can I source them with one line of bash?
Details
Docker run and compose have a convenient facility for importing a set of environment variables from a file. That file has a very literal format.
The value is used as is and not modified at all. For example, if the value is surrounded by quotes (as is often the case with shell variables), the quotes are included in the value passed.
Lines beginning with # are treated as comments and are ignored
Blank lines are also ignored.
"If no = is provided and that variable is…exported in your local environment," docker "passes it to the container"
Thankfully, whitespace before the = will cause the run to fail
so, for example, this env-file:
# This is a comment, with an = sign, just to mess with us
VAR1=value1
VAR2=value2
USER
VAR3=is going to = trouble
VAR4=this $sign will mess with things
VAR5=var # with what looks like a comment
#VAR7 =would fail
VAR8= but what about this?
VAR9="and this?"
results in these env variables in the container:
USER=ubuntu
VAR1=value1
VAR2=value2
VAR3=is going to = trouble
VAR4=this $sign will mess with things
VAR5=var # with what looks like a comment
VAR8= but what about this?
VAR9="and this?"
The bright side is that once I know what I'm working with, it's pretty easy to predict the effect. What I see is what I get. But I don't think bash would be able to interpret this in the same way without a lot of changes. How can I put this square Docker peg into a round Bash hole?
tl;dr:
source <(sed -E -e "s/^([^#])/export \1/" -e "s/=/='/" -e "s/(=.*)$/\1'/" env.list)
You're probably going to want to source a file, whose contents
are executed as if they were printed at the command line.
But what file? The raw docker env-file is inappropriate, because it won't export the assigned variables such that they can be used by child processes, and any of the input lines with spaces, quotes, and other special characters will have undesirable results.
Since you don't want to hand edit the file, you can use a stream editor to transform the lines to something more bash-friendly. I started out trying to solve this with one or two complex Perl 5 regular expressions, or some combination of tools, but I eventually settled on one sed command with one simple and two extended regular expressions:
sed -E -e "s/^([^#])/export \1/" -e "s/=/='/" -e "s/(=.*)$/\1'/" env.list
This does a lot.
The first expression prepends export to any line whose first character is anything but #.
As discussed, this makes the variables available to anything else you run in this session, your whole point of being here.
The second expression simply inserts a single-quote after the first = in a line, if applicable.
This will always enclose the whole value, because sed replaces only the first = by default; a greedy match (anchoring on the last =) could instead lop off part of a value such as VAR3's.
The third expression appends a second quote to any line that has at least one =.
It's important here to match on the = again, so that we don't append an unmatched quotation mark to lines, like export USER, that never received an opening quote.
Results:
# This is a comment, with an =' sign, just to mess with us'
export VAR1='value1'
export VAR2='value2'
export USER
export VAR3='is going to = trouble'
export VAR4='this $sign will mess with things'
export VAR5='var # with what looks like a comment'
#VAR7 ='would fail'
export VAR8=' but what about this?'
export VAR9='"and this?"'
Some more details:
By wrapping the values in single-quotes, you've
prevented bash from assuming that the words after the space are a command
appropriately brought the # and all succeeding characters into the value of VAR5
prevented the evaluation of $sign, which, if wrapped in double-quotes, bash would have interpreted as a variable
Finally, we'll take advantage of process substitution to pass this stream as a file to source, bringing all of this down to one line of bash.
source <(sed -E -e "s/^([^#])/export \1/" -e "s/=/='/" -e "s/(=.*)$/\1'/" env.list)
Et voilà!
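If you want a quick sanity check, assuming the env.list shown above:
source <(sed -E -e "s/^([^#])/export \1/" -e "s/=/='/" -e "s/(=.*)$/\1'/" env.list)
echo "$VAR3"   # is going to = trouble
echo "$VAR9"   # "and this?"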
I have recently started studying shell scripting and I'd like to be able to comment out a set of lines in a shell script, the way I would in C/Java:
/* comment1
comment2
comment3
*/
How could I do that?
Use : ' to open and ' to close.
For example:
: '
This is a
very neat comment
in bash
'
Multiline comment in bash
: <<'END_COMMENT'
This is a heredoc (<<) redirected to a NOP command (:).
The single quotes around END_COMMENT are important,
because they disable variable expansion and command substitution
within these lines. Without the single-quotes around END_COMMENT,
the following two $() `` commands would get executed:
$(gibberish command)
`rm -fr mydir`
comment1
comment2
comment3
END_COMMENT
Note: I updated this answer based on comments and other answers, so comments prior to May 22nd 2020 may no longer apply. I also noticed today that some IDEs like VS Code and PyCharm do not recognize a HEREDOC marker that contains spaces, whereas bash has no problem with it, so I'm updating this answer again.
Bash does not provide a builtin syntax for multi-line comments, but there are hacks using existing bash syntax that "happen to work now".
Personally I think the simplest (i.e. least noisy, least weird, easiest to type, most explicit) is to use a quoted HEREDOC, but make it obvious what you are doing, and use the same HEREDOC marker everywhere:
<<'###BLOCK-COMMENT'
line 1
line 2
line 3
line 4
###BLOCK-COMMENT
Single-quoting the HEREDOC marker avoids shell parsing side-effects, such as weird substitutions that would cause a crash or stray output, and even parsing of the marker itself. So the single quotes give you more freedom in choosing the open/close comment marker.
For example, the following uses a triple hash, which kind of suggests a multi-line comment in bash. This would crash the script if the single quotes were absent. Even if you removed the ###, the ${FOO{}} would crash the script (or cause a bad-substitution message to be printed if set -e were not in effect) were it not for the single quotes:
set -e
<<'###BLOCK-COMMENT'
something something ${FOO{}} something
more comment
###BLOCK-COMMENT
ls
You could of course just use
set -e
<<'###'
something something ${FOO{}} something
more comment
###
ls
but the intent of this is definitely less clear to a reader unfamiliar with this trickery.
Note that my original answer used '### BLOCK COMMENT', which is fine if you use vanilla vi/vim, but today I noticed that PyCharm and VS Code don't recognize the closing marker if it contains spaces.
Nowadays any good editor allows you to press ctrl-/ or similar, to un/comment the selection. Everyone definitely understands this:
# something something ${FOO{}} something
# more comment
# yet another line of comment
although admittedly, this is not nearly as convenient as the block comment above if you want to re-fill your paragraphs.
There are surely other techniques, but there doesn't seem to be a "conventional" way to do it. It would be nice if ###> and ###< could be added to bash to indicate start and end of comment block, seems like it could be pretty straightforward.
After reading the other answers here I came up with the below, which IMHO makes it really clear it's a comment. Especially suitable for in-script usage info:
<< ////
Usage:
This script launches a spaceship to the moon. It's doing so by
leveraging the power of the Fifth Element, AKA Leeloo.
Will only work if you're Bruce Willis or a relative of Milla Jovovich.
////
As a programmer, the sequence of slashes immediately registers in my brain as a comment (even though slashes are normally used for line comments).
Of course, "////" is just a string; the only requirement is that the closing line matches the opening delimiter exactly (the same number of slashes).
I tried the chosen answer, but when I ran a shell script containing it, the whole block was printed to the screen (similar to how Jupyter notebooks print out everything in '''xx''' quotes) and there was an error message at the end. It wasn't doing anything harmful, but it was scary. Then, while editing it, I realised that single quotes can span multiple lines. So let's just assign the block to a variable:
x='
echo "these lines will all become comments."
echo "just make sure you don_t use single-quotes!"
ls -l
date
'
what's your opinion on this one?
function giveitauniquename()
{
so this is a comment
echo "there's no need to further escape apostrophes/etc if you are commenting your code this way"
the drawback is it will be stored in memory as a function as long as your script runs unless you explicitly unset it
only valid-ish bash allowed inside for instance these would not work without the "pound" signs:
1, for #((
2, this #wouldn't work either
function giveitadifferentuniquename()
{
echo nestable
}
}
Here's how I do multiline comments in bash.
This mechanism has two advantages that I appreciate. One is that comments can be nested. The other is that blocks can be enabled by simply commenting out the initiating line.
#!/bin/bash
# : <<'####.block.A'
echo "foo {" 1>&2
fn data1
echo "foo }" 1>&2
: <<'####.block.B'
fn data2 || exit
exit 1
####.block.B
echo "can't happen" 1>&2
####.block.A
In the example above, the "B" block is commented out, but the parts of the "A" block outside the "B" block are not commented out, because the line that would start the "A" comment is itself commented out with #.
Running that example will produce this output:
foo {
./example: line 5: fn: command not found
foo }
can't happen
A simple solution, not very smart:
Temporarily block a part of a script:
if false; then
while you respect syntax a bit, please
do write here (almost) whatever you want.
but when you are
done # write
fi
A slightly more sophisticated version:
time_of_debug=false # Let's set this variable at the beginning of the script
if $time_of_debug; then # in the middle of the script
echo I keep this code aside until there is the time of debug!
fi
In plain bash, to comment out a block of code, I do:
:||{
block
of code
}
This question already has answers here:
Why a variable assignment replaces tabs with spaces
I'm having some trouble with the printf function in bash.
I wrote a little script on which I pass a name and two letters (such as "sh", "py", "ht") and it creates a file in the current working directory named "name.extension".
For instance, if I execute seed test py a file named test.py is created in the current working dir with the shebang #!/usr/bin/python3.
So far, so good, nothing fancy: I'm learning shell scripting and I thought this could be a simple exercise to test the knowledge gained so far.
The problem is when I want to create an HTML file. This is the function that I use:
creaHtml(){
head='<!--DOCTYPE html-->\n<html>\n\t<head>\n\t\t<meta charset=\"UTF-8\">\n\t</head>\n\t<body>\n\t</body>\n</html>'
percorso=$CARTELLA_CORRENTE/$NOME_FILE.html
printf $head>>$percorso
chmod 755 $percorso
}
If I run, for instance, seed test ht the correct function (creaHtml) is called, test.html is created but if I try to look into it I only see:
<!--DOCTYPE
And nothing else.
This is the trace for that function:
[sviluppo:~/bin]$ seed test ht
+ creaHtml
+ head='<!--DOCTYPE html-->\n<html>\n\t<head>\n\t\t<meta charset=\"UTF-8\">\n\t</head>\n\t<body>\n\t</body>\n</html>'
+ percorso=/home/sviluppo/bin/test.html
+ printf '<!--DOCTYPE' 'html-->\n<html>\n\t<head>\n\t\t<meta' 'charset=\"UTF-8\">\n\t</head>\n\t<body>\n\t</body>\n</html>'
+ chmod 755 /home/sviluppo/bin/test.html
+ set +x
However, if I try to run printf '<!--DOCTYPE html-->\n<html>\n\t<head>\n\t\t<meta charset=\"UTF-8\">\n\t</head>\n\t<body>\n\t</body>\n</html>' from the terminal, I see the correct output: the "skeleton" of an HTML file neatly displayed with indentation and everything. What am I missing here?
Try echo -e instead of printf. printf is for printing formatted strings. Since you didn't protect $head with quotes, bash split the string into words to form the command: the first word (before the first whitespace) became the format string, and the rest became extra arguments that the format string never tells printf to print.
echo -e "$head" > "$percorso"
The -e evaluates your \n into newlines. I changed your >> to > since it looks like you want this to be the whole file, rather than append to any existing file you might have.
You have to be careful with quotes in bash. One thing can become many things. This actually makes the shell more powerful, but it can be confusing for people who are learning it. Notice that I also put the file name "$percorso" in double quotes. This evaluates the variable and makes sure that it ends up as one thing. If you use single quotes, it will be one word, but not evaluated. Unlike Python, there is a big difference between single and double quotes.
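A tiny illustration of that difference (the variable name is made up):
greeting='hello   world'
echo $greeting      # unquoted: word splitting collapses the spaces -> hello world
echo "$greeting"    # double quotes: one word, spacing preserved -> hello   world
echo '$greeting'    # single quotes: no evaluation at all -> $greeting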
If you want to use printf for compatibility, as @chepner pointed out, just be sure to quote it:
printf "$head" > "$percorso"
Actually that is much simpler anyway.
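Putting it together, a corrected version of the function from the question might look like this (just a sketch; CARTELLA_CORRENTE and NOME_FILE come from the rest of your script):
creaHtml(){
    head='<!--DOCTYPE html-->\n<html>\n\t<head>\n\t\t<meta charset="UTF-8">\n\t</head>\n\t<body>\n\t</body>\n</html>'
    percorso="$CARTELLA_CORRENTE/$NOME_FILE.html"
    printf "$head" > "$percorso"    # quoted: one format string, \n and \t are expanded
    chmod 755 "$percorso"
}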
I found a great answer for how to comment in a bash script (by @sunny256):
#!/bin/bash
echo before comment
: <<'END'
bla bla
blurfl
END
echo after comment
The single quotes around the END delimiter are important; otherwise things inside the block, such as $(command), will be parsed and executed.
This may be ugly, but it works, and I'm keen to know what it means. Can anybody explain it simply? I did already find an explanation of :, namely that it is a no-op or true. But it still does not make sense to me to call no-op or true here anyway...
I'm afraid this explanation is less "simple" and more "thorough", but here we go.
The goal of a comment is to be text that is not interpreted or executed as code.
Originally, the UNIX shell did not have a comment syntax per se. It did, however, have the null command : (once an actual binary program on disk, /bin/:), which ignores its arguments and does nothing but indicate successful execution to the calling shell. Effectively, it's a synonym for true that looks like punctuation instead of a word, so you could put a line like this in your script:
: This is a comment
It's not quite a traditional comment; it's still an actual command that the shell executes. But since the command doesn't do anything, surely it's close enough: mission accomplished! Right?
The problem is that the line is still treated as a command beyond simply being run as one. Most importantly, lexical analysis - parameter substitution, word splitting, and such - still takes place on those destined-to-be-ignored arguments. Such processing means you run the risk of a syntax error in a "comment" crashing your whole script:
: Now let's see what happens next
echo "Hello, world!"
#=> hello.sh: line 1: unexpected EOF while looking for matching `''
That problem led to the introduction of a genuine comment syntax: the now-familiar # (which was first introduced in the C shell created at BSD). Everything from # to the end of the line is completely ignored by the shell, so you can put anything you like there without worrying about syntactic validity:
# Now let's see what happens next
echo "Hello, world!"
#=> Hello, world!
And that's How The Shell Got Its Comment Syntax.
However, you were looking for a multi-line (block) comment, of the sort introduced by /* (and terminated by */) in C or Java. Unfortunately, the shell simply does not have such a syntax. The normal way to comment out a block of consecutive lines - and the one I recommend - is simply to put a # in front of each one. But that is admittedly not a particularly "multi-line" approach.
Since the shell supports multi-line string-literals, you could just use : with such a string as an argument:
: 'So
this is all
a "comment"
'
But that has all the same problems as single-line :. You could also use backslashes at the end of each line to build a long command line with multiple arguments instead of one long string, but that's even more annoying than putting a # at the front, and more fragile since trailing whitespace breaks the line-continuation.
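For illustration, the backslash variant would look something like this (note that any whitespace after a backslash silently breaks the continuation):
: this comment is continued \
  across several lines \
  as arguments to the null command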
The solution you found uses what is called a here-document. The syntax some-command <<whatever causes the following lines of text - from the line immediately after the command, up to but not including the next line containing only the text whatever - to be read and fed as standard input to some-command. Here's an alternate shell implementation of "Hello, world" which takes advantage of this feature:
cat <<EOF
Hello, world
EOF
If you replace cat with our old friend :, you'll find that it ignores not only its arguments but also its input: you can feed whatever you want to it, and it will still do nothing (and still indicate that it did that nothing successfully).
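A minimal check of that claim (NOTES is just an arbitrary delimiter):
: <<NOTES
These lines are fed to : as standard input and simply discarded.
NOTES
echo "exit status: $?"    # prints: exit status: 0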
However, the contents of a here-document do undergo string processing. So just as with the single-line : comment, the here-document version runs the risk of syntax errors inside what is not meant to be executable code:
#!/bin/sh -e
: <<EOF
(This is a backtick: `)
EOF
echo 'In modern shells, $(...) is preferred over backticks.'
#=> ./demo.sh: line 2: bad substitution: no closing "`" in `
The solution, as seen in the code you found, is to quote the end-of-document "sentinel" (the EOF or END or whatever) on the line introducing the here document (e.g. <<'EOF'). Doing this causes the entire body of the here-document to be treated as literal text - no parameter expansion or other processing occurs. Instead, the text is fed to the command unchanged, just as if it were being read from a file. So, other than a line consisting of nothing but the sentinel, the here-document can contain any characters at all:
#!/bin/sh -e
: <<'EOF'
(This is a backtick: `)
EOF
echo 'In modern shells, $(...) is preferred over backticks.'
#=> In modern shells, $(...) is preferred over backticks.
(It is worth noting that the way you quote the sentinel doesn't matter - you can use <<'EOF', <<E"OF", or even <<EO\F; all have the same result. This is different from the way here-documents work in some other languages, such as Perl and Ruby, where the content is treated differently depending on the way the sentinel is quoted.)
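For example, all three of these spellings behave the same way in bash; the body stays literal and the closing line is plain EOF in each case:
cat <<'EOF'
$HOME is not expanded here
EOF
cat <<E"OF"
$HOME is not expanded here either
EOF
cat <<EO\F
nor here
EOF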
Notwithstanding any of the above, I strongly recommend that you instead just put a # at the front of each line you want to comment out. Any decent code editor will make that operation easy - even plain old vi - and the benefit is that nobody reading your code will have to spend energy figuring out what's going on with something that is, after all, intended to be documentation for their benefit.
It is called a Here Document. It is a code block that lets you send a list of commands to another command or program.
The string following the << is the marker that determines the end of the block. If you send the commands to the no-op command :, nothing happens, which is why you can use it as a comment block.
That's heredoc syntax. It's a way of defining multi-line string literals.
As the answer at your link explains, the single quotes around the END disables interpolation, similar to the way single-quoted strings disable interpolation in regular bash strings.
The other day I stumbled upon a question on SO. If I wanted to extract the value of HOSTNAME in /etc/sysconfig/network, which contains
NETWORKING=yes
HOSTNAME=foo
Now, I can do grep and cut to get the foo, but there was some bash magic involved for a similar issue. I don't know what to search for, and I can't seem to find the question now. It involved something like #{HOSTNAME}, as if it were treating HOSTNAME as a key and foo as a value.
If that configuration file is compatible with shell syntax, simply include it as a shell script. IIRC the files in /etc/sysconfig on Red Hat-like distributions are indeed designed to be parsable by a shell. Note that this means that
If shell special characters may end up in a variable's value, they must be properly quoted. For example, var="value with spaces" requires the quotes. var="with\$dollar" requires the backslash.
The file may contain arbitrary code that will be executed, so this is only OK if you trust its content.
If these assumptions are valid, then you can go the simple route:
. /etc/sysconfig/network
echo "$HOSTNAME"
Regarding the quoting and braces, see $VAR vs ${VAR} and to quote or not to quote.
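If the trust assumption above does not hold, a non-executing alternative is to pull out just the one value, roughly along the grep/cut lines the question mentions (a sketch that assumes simple unquoted KEY=value lines like the ones shown):
hostname=$(sed -n 's/^HOSTNAME=//p' /etc/sysconfig/network)
echo "$hostname"    # foo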