Can someone help me understand printf's alignment capability? I have tried reading several examples on Stack Overflow and general Google results, and I'm still having trouble understanding its syntax. Here is essentially what I'm trying to achieve:
HOLDING 1.1.1.1 Hostname Potential outage!
SKIPPING 1:1:1:1:1:1:1:1 Hostname Existing outage!
I'm sorry, I know this is more of a handout than my usual questions; I really don't know how to start here. I have tried using echo -e "\t" in the past, which works for horizontal placement but not alignment. I have also incorporated a much more complex tput cup solution using a for loop, but that will not work easily in this situation.
I just discovered printf's capability, though, and it seems like it will do what I need, but I don't understand the syntax. Maybe something like this?
A="HOLDING"
B="1.1.1.1"
C="Hostname"
D="Potential outage"
for (( j=1; j<=10; j++ )); do
printf "%-10s" "$A" "$B" "$C" "$D"
echo
done
Those variables would be fed in from a DB, though. I still don't really understand the printf syntax. Please help.
* ALSO *
Off-topic question: what is your incentive for responding? I'm fairly new to Stack Exchange. Do some of you get anything out of it other than reputation? Careers 2.0, or something else? Some people have ridiculous stats on this site. Just curious what the drive is.
The string %-10s can be broken up into multiple parts:
% introduces a conversion specifier, i.e. how to format an argument.
- specifies that the field should be left-aligned.
10 specifies the field width.
s specifies the data type: string.
Bash printf format strings mimic those of the C library function printf(3), and this part is described in man 3 printf.
Additionally, Bash's printf, when given more arguments than conversion specifiers, reuses the format string until all arguments are consumed, so that printf "%-10s" foo bar is equivalent to printf "%-10s" foo; printf "%-10s" bar. This is what lets you pass all the arguments to the same command, with %-10s applying to each of them.
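Put together, a minimal sketch that produces the two aligned rows from the question (the field widths here are illustrative; widen them to fit your longest value):

```shell
#!/bin/sh
# The format string is reused for each group of four arguments:
# status (10 cols), address (17 cols), hostname (10 cols), message.
printf '%-10s %-17s %-10s %s\n' \
    "HOLDING"  "1.1.1.1"         "Hostname" "Potential outage!" \
    "SKIPPING" "1:1:1:1:1:1:1:1" "Hostname" "Existing outage!"
```

Because the format string is reused, a single printf call handles any number of rows.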
As for people's motivation, you could try the meta site, which is dedicated to questions about Stack Overflow itself.
The Bash reference manual mentions that Bash supports the
following for-loop constructs:
for name [ [in [words ...] ] ; ] do commands; done
for (( expr1 ; expr2 ; expr3 )) ; do commands ; done
Surprisingly, the following for-loop constructs are also valid:
for i in 1 2 3; { echo $i; }
for ((i=1;i<=3;++i)); { echo $i; }
These unusual constructs are not documented at all. Neither the Bash manual, the Bash man pages, nor The Linux Documentation Project makes any mention of them.
When investigating the language grammar one can see that using
open-and-close braces ({ commands; }) as an alternative to do commands; done is a valid construct that is implemented for both
for-loops and select statements and dates back to Bash-1.14.7
[1].
The other two loop-constructs:
until test-commands; do consequent-commands; done
while test-commands; do consequent-commands; done
do not have this alternative form.
Since many shell languages are related, one can find these constructs defined there as well, and lightly documented. The KSH manual mentions:
For historical reasons, open and close braces may be used instead of do and done e.g.
for i; { echo $i; }
while ZSH implements and documents similar alternatives for the other loop-constructs, but with limitations. It states:
For the if, while and until commands, in both these cases the
test part of the loop must also be suitably delimited, such as by
[[ ... ]] or (( ... )), else the end of the test will not be recognized.
Question: What is the origin of this construct and why is
this not propagated to the other loop-constructs?
Update 1: There are some very useful and educational comments below
this post pointing out that this is an undocumented Bourne Shell feature which seems to be the result of a C-vs-sh language battle in the early days.
Update 2: When asking the question "Why is this language feature not documented?" on the GNU Bash mailing list, I received the following answer from Chet Ramey (current lead developer of GNU Bash):
It's never been documented. The reason bash supports it (undocumented) is
because it was an undocumented Bourne shell feature that we implemented
for compatibility. At the time, 30+ years ago, there were scripts that used
it. I hope those scripts have gone into the dustbin of history, but who
knows how many are using this construct now.
I'm going to leave it undocumented; people should not be using it anyway.
Related questions/answers:
A bash loop with braces?
Hidden features of Bash (this answer)
[U&L] What is the purpose of the “do” keyword in Bash for loops?
Footnotes: [1] I did not find earlier versions; I do believe it predates this.
[W]hy is this not propagated to the other loop-constructs?
Braced forms of while and until commands would be syntactically ambiguous because you can't separate test-commands from consequent-commands without having a distinctive delimiter between them as they are both defined by POSIX to be compound lists.
For example, a shell that supports such constructs can choose either one of the brace groups in the command below as consequent-commands and either way it would be a reasonable choice.
while true; { false; }; { break; }
Because of its ambiguous form, this command can be translated to either of the below; neither is a more accurate translation than the other, and they do completely different things.
while true; do
false
done
break
while true; { false; }; do
break
done
The for command is immune to this ambiguity because its first part (a variable name optionally followed by in and a list of words, or a special form of the (( compound command) can easily be distinguished from the brace group that forms its second part.
Given that we already have a consistent syntax for while and until commands, I don't really see any point in propagating this alternate form to them.
Wrt its origin, see:
Characteristical common properties of the traditional Bourne shells,
Stephen Bourne's talk at BSDCon,
Unix v7 source code, sh/cmd.c.
I believe both of the following code snippets are valid in POSIX compliant shell:
Option 1:
if [ "$var" = "dude" ]
then
echo "Dude, your var equals dude."
fi
Option 2:
if test "$var" = "dude"
then
echo "Dude, your var equals dude."
fi
Which syntax is preferred and why? Is there a reason to use one over the other in certain situations?
There is no functional difference, making this a purely stylistic choice with no widely accepted guidelines. The bash-hackers wiki has an extended section on classic (POSIX-compliant) test, with a great deal of attention to best practices and pitfalls, and takes no position on which to prefer.
Moreover, the POSIX specification for test, while it marks a great deal of functionality obsolescent [1], specifies neither form as preferred over the other.
That said, one advantage to test is that it's less conducive to folks bringing in expectations from other languages which result in broken or buggy code. For instance, it's a common error to write [$foo=1] rather than the correct [ "$foo" = 1 ], but folks aren't widely seen to write test$foo=1: It's more visually obvious that test "$foo" = 1 is following the same parsing rules as other shell commands, and thus requires the same care regarding quoting and whitespace.
[1] Such as -a, -o, ( and ), and any usage with more than four arguments (excluding the trailing ] on an instance started under the name [).
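A small sketch of the point above: both forms parse as ordinary commands, so both need the same quoting care:

```shell
#!/bin/sh
var="dude"

# [ is a command whose last argument must be ]; test is the same
# command without the closing bracket. These are equivalent:
[ "$var" = "dude" ] && echo "bracket form matched"
test "$var" = "dude" && echo "test form matched"

# The classic pitfall: with an empty unquoted variable,
# [ $empty = "dude" ] expands to [ = dude ] and fails to parse.
empty=""
[ "$empty" = "dude" ] || echo "quoting keeps the comparison valid"
```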
This question already has answers here:
What is the difference between $(command) and `command` in shell programming?
(6 answers)
Closed 7 years ago.
So, this question seems non-specific. It is, because I'm not a Bash programmer, rather a Biologist-turned-writing-some-useful-scripts-for-my-daily-work-scripter. Anyway. Say I have a for loop, like so:
for CHR in $(seq 1 22); do
echo "Processing chromosome ${CHR}";
done
I used to write `seq 1 22` but now I've learned to write $(seq 1 22). Clearly there is a difference in the way you write it, but what is the difference in terms of computer language and interpretation? Can someone explain that to me?
The other thing I learned by simply doing on the command line on our computer cluster, was to call "i" differently. I used to do: $CHR. But when I'd have a file name sometext_chr to which I'd like to add the number (sometext_chr$CHR) that wouldn't work. What does work is sometext_chr${CHR}. Why is that? Can someone help me explain the difference?
Again, I know the question is a bit non-specific - I simply didn't know how else to frame it - but I hope someone can teach me the differences.
Thanks and best!
Sander
The $(...) can be nested easily, as the parentheses clearly indicate where an expression starts and ends. Using `, nesting is not so simple, as the start and end symbols are the same.
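For example (a minimal sketch of both forms):

```shell
#!/bin/sh
# $( ) nests without any escaping:
outer=$(echo "have $(echo nested)")
echo "$outer"    # prints: have nested

# The backtick form needs the inner backticks escaped:
outer2=`echo have \`echo nested\``
echo "$outer2"   # prints: have nested
```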
Your second example is probably from memory, because it's incorrect. sometext$chr and sometext${chr} would both work the same way. Perhaps what you really meant was a situation like this:
$chr_sometext
${chr}_sometext
The key point here is that _ is a valid character in a variable name. As a result, $chr_sometext is interpreted as the value of the variable chr_sometext. In ${chr}_sometext the variable is clearly chr, and the _sometext that follows it is a literal string. Just as if you wrote $chrsometext, you wouldn't assume that the chr is somehow special. This is why you have to add the clarifying braces.
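A quick sketch of the difference:

```shell
#!/bin/sh
chr=21

# Without braces the shell looks up a variable literally named
# chr_sometext, which is unset, so this prints an empty line:
echo "$chr_sometext"

# With braces the variable is unambiguously chr:
echo "${chr}_sometext"   # prints: 21_sometext
```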
This question already has answers here:
Multidimensional associative arrays in Bash
(2 answers)
Closed 2 years ago.
Can one construct an associative array whose elements contain arrays in bash? For instance, suppose one has the following arrays:
a=(a aa)
b=(b bb bbb)
c=(c cc ccc cccc)
Can one create an associative array to access these variables? For instance,
declare -A letters
letters[a]=$a
letters[b]=$b
letters[c]=$c
and then access individual elements by a command such as
letter=${letters[a]}
echo ${letter[1]}
This mock syntax for creating and accessing elements of the associative array does not work. Do valid expressions accomplishing the same goals exist?
This is the best non-hacky way to do it, but you're limited to accessing single elements. Indirect variable expansion is another option, but you'd still have to store every element set in an array. If you wanted some form of anonymous arrays, you'd need a random parameter-name generator; if you don't use a random name for an array, there's no sense referencing it from an associative array. And of course I wouldn't like using external tools to generate random anonymous variable names.
#!/bin/bash

a=(a aa)
b=(b bb bbb)
c=(c cc ccc cccc)

declare -A letters

function store_array {
    local var=$1 base_key=$2 values=("${@:3}")
    for i in "${!values[@]}"; do
        eval "$var[\$base_key|$i]=\${values[i]}"
    done
}

store_array letters a "${a[@]}"
store_array letters b "${b[@]}"
store_array letters c "${c[@]}"

echo "${letters[a|1]}"
I think the more straightforward answer is "No, bash arrays cannot be nested."
Anything that simulates nested arrays is actually just creating fancy mapping functions for the keyspace of the (single layered) arrays.
Not that that's bad: it may be exactly what you want, but especially when you don't control the keys into your array, doing it properly becomes harder.
Although I like the solution given by @konsolebox of using a delimiter, it ultimately falls over if your keyspace includes keys like "p|q".
It does have a nice benefit in that you can operate transparently on your keys, as in array[abc|def] to look up the key def in array[abc], which is very clear and readable.
Because it relies on the delimiter not appearing in the keys, this is only a good approach when you know what the keyspace looks like now and in all future uses of the code. This is only a safe assumption when you have strict control over the data.
If you need any kind of robustness, I would recommend concatenating hashes of your array keys. This is a simple technique that is extremely likely to eliminate conflicts, although they are possible if you are operating on extremely carefully crafted data.
To borrow a bit from how Git handles hashes, let's take the first 8 characters of the sha512sums of keys as our hashed keys.
If you feel nervous about this, you can always use the whole sha512sum, since there are no known collisions for sha512.
Using the whole checksum makes sure that you are safe, but it is a little bit more burdensome.
So, if I want the semantics of storing an element in array[abc][def] what I should do is store the value in array["$(keyhash "abc")$(keyhash "def")"] where keyhash looks like this:
function keyhash () {
echo "$1" | sha512sum | cut -c-8
}
You can then pull out the elements of the associative array using the same keyhash function.
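A usage sketch (assuming bash 4+ for declare -A, plus the sha512sum and cut utilities from coreutils):

```shell
#!/bin/bash
keyhash () {
    echo "$1" | sha512sum | cut -c-8
}

declare -A array
# Simulate array[abc][def] by concatenating the two key hashes:
array["$(keyhash "abc")$(keyhash "def")"]="hello"

# Retrieval uses the same construction:
echo "${array["$(keyhash "abc")$(keyhash "def")"]}"   # prints: hello
```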
Funnily, there's a memoized version of keyhash you can write which uses an array to store the hashes, preventing extra calls to sha512sum, but it gets expensive in terms of memory if the script takes many keys:
declare -A keyhash_array
function keyhash () {
if [ "${keyhash_array["$1"]}" == "" ];
then
keyhash_array["$1"]="$(echo "$1" | sha512sum | cut -c-8)"
fi
echo "${keyhash_array["$1"]}"
}
A length inspection on a given key tells me how many layers deep it looks into the array, since that's just len/8, and I can see the subkeys for a "nested array" by listing keys and trimming those that have the correct prefix.
So if I want all of the keys in array[abc], what I should really do is this:
for key in "${!array[#]}"
do
if [[ "$key" == "$(keyhash "abc")"* ]];
then
# do stuff with "$key" since it's a key directly into the array
:
fi
done
Interestingly, this also means that first level keys are valid and can contain values. So, array["$(keyhash "abc")"] is completely valid, which means this "nested array" construction can have some interesting semantics.
In one form or another, any solution for nested arrays in Bash is pulling this exact same trick: produce a (hopefully injective) mapping function f(key,subkey) which produces strings that they can be used as array keys.
This can always be applied further as f(f(key,subkey),subsubkey) or, in the case of the keyhash function above, I prefer to define f(key) and apply to subkeys as concat(f(key),f(subkey)) and concat(f(key),f(subkey),f(subsubkey)).
In combination with memoization for f, this is a lot more efficient.
In the case of the delimiter solution, nested applications of f are necessary, of course.
With that known, the best solution that I know of is to take a short hash of the key and subkey values.
I recognize that there's a general dislike for answers of the type "You're doing it wrong, use this other tool!" but associative arrays in bash are messy on numerous levels, and run you into trouble when you try to port code to a platform that (for some silly reason or another) doesn't have bash on it, or has an ancient (pre-4.x) version.
If you are willing to look into another language for your scripting needs, I'd recommend picking up some awk.
It provides the simplicity of shell scripting with the flexibility that comes with more feature rich languages.
There are a few reasons I think this is a good idea:
GNU awk (the most prevalent variant) has fully fledged associative arrays which can nest properly, with the intuitive syntax of array[key][subkey]
You can embed awk in shell scripts, so you still get the tools of the shell when you really need them
awk is stupidly simple at times, which puts it in stark contrast with other shell replacement languages like Perl and Python
That's not to say that awk is without its failings. It can be hard to understand when you're first learning it because it's heavily oriented towards stream processing (a lot like sed), but it's a great tool for a lot of tasks that are just barely outside of the scope of the shell.
Note that above I said that "GNU awk" (gawk) has multidimensional arrays. Other awks actually do the trick of separating keys with a well-defined separator, SUBSEP. You can do this yourself, as with the array[a|b] solution in bash, but nawk has this feature builtin if you do array[key,subkey]. It's still a bit more fluid and clear than bash's array syntax.
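A short sketch of the two spellings; the nested form needs gawk 4.0 or newer, while the SUBSEP form works in any POSIX awk:

```shell
#!/bin/sh
# Portable SUBSEP form: the comma joins the keys into one string key.
awk 'BEGIN { a["abc", "def"] = 1; print a["abc", "def"] }'   # prints: 1

# gawk-only true nesting (uncomment if gawk is available):
# gawk 'BEGIN { a["abc"]["def"] = 1; print a["abc"]["def"] }'
```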
For those stumbling on this question when looking for ways to pass command line arguments within a command line argument, an encoding such as JSON could prove useful, as long as the consumer agrees to use the encoding.
# Usage: $0 --toolargs '["arg 1", "arg 2"]' --otheropt
toolargs="$2"
v=()
while read -r line; do v+=("${line}"); done < <(jq -r '.[]' <<< "${toolargs}")
sometool "${v[@]}"
nestenc='{"a": ["a", "aa "],
"b": ["b", "bb", "b bb"],
"c d": ["c", "cc ", " ccc", "cc cc"]
}'
index="c d"
letter=()
while read -r line; do
letter+=("${line}")
done < <(jq -r ".\"${index}\"[]" <<< "${nestenc}")
for c in "${letter[@]}" ; do echo "<<${c}>>" ; done
The output follows.
<<c>>
<<cc>>
<<ccc>>
<<cc cc>>
In bash I am trying to code a conditional with numbers that are decimals (with fractional parts). Then I found out that bash cannot do decimal arithmetic.
The script that I have is as follows:
a=$(awk '/average TM cross section = / {CCS=$6}; END {printf "%15.4E \n", CCS}' "${names}_$i.out")
a=$(printf '%.2f\n' "$a")
echo "$a"
In the *.out file the numbers are in scientific notation. At the end, echo "$a" results in a number like 245.35 (other numbers in other files). So, I was wondering how to change the output number 245.35 into 24535 so I can do a conditional in bash.
I tried to multiply and that obviously did not work. Can anyone help with this conversion?
You might do best to use something other than bash for your arithmetic -- call out to something with a bit more power. You might find the following links either inspiring or horrifying: http://blog.plover.com/prog/bash-expr.html ("Arithmetic expressions in shell scripts") and http://blog.plover.com/prog/spark.html ("Insane calculations in bash"); I'm afraid this is the sort of thing you're liable to end up with if you seriously try to do bash-based arithmetic. In particular, the to_rational function in the second of those articles includes some code for splitting up decimals using regular expressions, though he's doing something more complicated with them than it sounds like you do.
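That said, for a simple decimal comparison you don't need full-blown bash arithmetic; one common approach is to let awk evaluate the test and report it through its exit status (a sketch; the threshold is illustrative):

```shell
#!/bin/sh
a=245.35
# awk exits 0 (shell "true") when the expression holds, so it can
# drive the conditional directly:
if awk -v x="$a" 'BEGIN { exit !(x > 100.5) }'; then
    echo "value exceeds threshold"
fi
```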
Per our extended conversation
a=$(awk '/average TM cross section = / {CCS=$6}; END {printf "%15d\n", CCS * 100}' "${names}_$i.out")
Now your output will be an integer.
Note that awk is well designed for processing large files and testing logic. It is likely that all or most of your processing could be done in one awk process. If you're processing large amounts of data, the time savings can be significant.
I hope this helps.
As per the info you provided, this is not related to any arithmetic operation.
Treat the value as a string: find the decimal point and remove it. That's what I understand.
http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/
http://www.thegeekstuff.com/2010/07/bash-string-manipulation/
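That string manipulation needs no external tools; POSIX parameter expansion can drop the decimal point directly (a sketch assuming exactly one decimal point in the value):

```shell
#!/bin/sh
a="245.35"
# ${a%%.*} is everything before the dot, ${a#*.} everything after;
# concatenating the two removes the point.
b="${a%%.*}${a#*.}"
echo "$b"   # prints: 24535
```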