Could someone point me to a reference for reading values from an INI file with nested sections? Here is an example of the INI file:
example.ini
[ section1 ]
[[ section 1a ]]
key=value1
[[ section 2a ]]
key=value2
[ section 2 ]
[[ section 1a ]]
key=value1
[[ section 2a ]]
key=value2
Hopefully there is something along the lines of:
x = readFile "example.ini"
print x.section1."section 1a".key
An option is to convert the INI file to JSON or properties format and then read that; any pointers to such a freeware conversion utility (Windows/Ubuntu) might also work.
Unfortunately, you have to write your own parser for this; it is not that hard a task to accomplish, but it is what it is.
And since in a pipeline you can use Groovy and create methods, it will be easy to achieve.
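For what it's worth, a rough bash/awk sketch of such a lookup (untested; ini_get is a hypothetical helper, and it assumes the exact two-level layout from the question, with whitespace-trimmed section names) could look like this:

ini_get() {
    # usage: ini_get <file> <section> <subsection> <key>
    awk -v sec="$2" -v subsec="$3" -v key="$4" '
        /^[ \t]*\[\[/ {                                   # [[ subsection ]] header
            s = $0
            gsub(/^[ \t]*\[\[[ \t]*|[ \t]*\]\][ \t]*$/, "", s)
            insub = (insec && s == subsec)
            next
        }
        /^[ \t]*\[/ {                                     # [ section ] header
            s = $0
            gsub(/^[ \t]*\[[ \t]*|[ \t]*\][ \t]*$/, "", s)
            insec = (s == sec)
            insub = 0
            next
        }
        insub && /=/ {
            k = $0; sub(/=.*/, "", k); gsub(/[ \t]/, "", k)
            if (k == key) { v = $0; sub(/^[^=]*=/, "", v); print v; exit }
        }
    ' "$1"
}

ini_get example.ini section1 "section 1a" key    # should print value1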
I wrote a simple Python script using configobj that is called from the pipeline; posting a sample here:
from configobj import ConfigObj

def convertIni2Properties(conf, sectionPrefix=''):
    if not conf.sections:
        for scalar in conf.scalars:
            print "%s.%s = %s" % (sectionPrefix, scalar, conf[scalar])
    for section in conf.sections:
        convertIni2Properties(conf[section], '%s.%s' % (sectionPrefix, section))

conf = ConfigObj('myfile.ini')
convertIni2Properties(conf)
My INI file is a few hundred lines, and the recursive calls work fine, so I didn't bother changing it to an iterative version.
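For reference, run against the example.ini at the top of the question, the output should look roughly like this (the leading dot comes from the empty initial prefix):

.section1.section 1a.key = value1
.section1.section 2a.key = value2
.section 2.section 1a.key = value1
.section 2.section 2a.key = value2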
I have a project with bash, python, and C files where I would like to simply print the contents of the bash file in the doxygen documentation.
I am using Doxygen 1.9.4 and my Doxyfile has the following modified settings:
JAVADOC_AUTOBRIEF = YES
PYTHON_DOCSTRING = NO
OPTIMIZE_OUTPUT_JAVA = YES
EXTENSION_MAPPING = sh=C
EXTRACT_ALL = YES
SORT_BRIEF_DOCS = YES
INPUT = .
FILTER_PATTERNS = *.sh="sh_filter"
FILE_PATTERNS += *.sh
The sh_filter file has the following contents:
#!/bin/bash
echo "\verbatim"
echo "$(cat $1)"
echo "\endverbatim"
After running doxygen, there is nothing that appears in the file reference for the bash file that is within the working directory.
How can I get the file contents to be printed verbatim?
A bit long for a comment. The construction as requested is quite unusual, as normally the purpose is to document routines/variables/classes/namespaces etc. and support this with code; in this case the requirement is just to show the contents of the file.
The best solution I found is to create the filter like this:
#!/bin/bash
echo "/** \file"
echo "\details \verbinclude $1 */"
echo "$(cat $1)"
and have the following added to the doxygen settings file:
EXTENSION_MAPPING = sh=C
INPUT = .
FILTER_PATTERNS = *.sh="./sh_filter"
FILE_PATTERNS += *.sh
EXAMPLE_PATH = .
(where the different paths depend on the local situation)
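To sanity-check what doxygen will actually see, the filter can also be run by hand (myscript.sh is just a placeholder name):

$ ./sh_filter myscript.sh
/** \file
\details \verbinclude myscript.sh */
...original contents of myscript.sh...

Doxygen then resolves the \verbinclude against EXAMPLE_PATH, which is why that setting is part of the list above.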
I'm trying to include a bash script in an AWS SSM Document, via the Terraform templatefile function. In the aws:runShellScript section of the SSM document, I have a Bash for loop with an @ sign that seems to be causing an error during terraform validate.
Version of terraform: 0.13.5
Inside main.tf file:
resource "aws_ssm_document" "magical_document" {
name = "magical_ssm_doc"
document_type = "Command"
document_format = "YAML"
target_type = "/AWS::EC2::Instance"
content = templatefile(
"${path.module}/ssm-doc.yml",
{
Foo: var.foo
}
)
}
Inside my ssm-doc.yml file, I loop through an array:

for i in "${arr[@]}"; do
    if test -f "$i" ; then
        echo "[monitor://$i]" >> $f
        echo "disabled=0" >> $f
        echo "index=$INDEX" >> $f
    fi
done
Error:
Error: Error in function call
Call to function "templatefile" failed:
./ssm-doc.yml:1,18-19: Invalid character;
This character is not used within the language., and 1 other diagnostic(s).
I tried escaping the @ symbol, like \@, but it didn't help. How do I fix this?
Although the error points to the @ symbol as the cause, it's the ${ } that's causing the problem, because this is Terraform interpolation syntax, and it applies to templatefile too. As the docs say:
The template syntax is the same as for string templates in the main Terraform language, including interpolation sequences delimited with ${ ... }.
And the way to escape interpolation syntax in Terraform is with a double dollar sign.
for i in "$${arr[#]}"; do
if test -f "$i" ; then
echo "[monitor://$i]" >> $f
echo "disabled=0" >> $f
echo "index=$INDEX" >> $f
fi
done
The interpolation syntax is useful with templatefile if you're trying to pass in an argument, such as Foo in the question. This argument can be accessed within the YAML file as ${Foo}.
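For illustration, a hypothetical pair of lines in ssm-doc.yml could combine both forms; Terraform substitutes the first and leaves the escaped one for the shell:

echo "index=${Foo}" >> $f          # Terraform replaces this with the value of var.foo
for i in "$${arr[@]}"; do          # rendered as "${arr[@]}" for bash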
By the way, although this article didn't give the answer to this exact issue, it helped me get a deeper appreciation for all the work Terraform is doing to handle different languages via the templatefile function. It had some cool tricks for doing replacements to escape for different scenarios.
I have a large data file that is a concatenation of many smaller files.
There is a separate index file that holds the file name and the start and end byte of each file within the data file.
I need help creating a bash script to split the large file into its thousands of sub-files.
Data file: fileafilebfilec etc.
Index File:
filename.png<0>3049
folder\filename2.png<3049>6136.
I guess this needs to loop through each line of the index file, then use dd to extract the relevant bytes into a file. A fiddly part might be that the folder separators are Windows-style (backslashes) rather than Linux-style.
Any help much appreciated.
while read p; do
    q=${p#*<}
    startbyte=${q%>*}
    endbyte=${q#*>}
    filename=${p%<*}
    count=$(($endbyte - $startbyte))
    toprint="processing $filename startbyte: $startbyte endbyte: $endbyte count: $count"
    echo $toprint
done <indexfile
Worked it out :-) FYI:
while read p; do
    # sort out variables
    q=${p#*<}
    startbyte=${q%>*}
    endbyte=${q#*>}
    filename=${p%<*}
    count=$(($endbyte - $startbyte))
    # let it know we're working
    toprint="processing $filename startbyte: $startbyte endbyte: $endbyte count: $count"
    echo $toprint
    if [[ $filename == *"/"* ]]; then
        echo "have found /"
        directory=${filename%/*}
        # if no directory exists, create it
        if [ ! -d ~/etg/"$directory" ]; then
            # Control will enter here if the directory doesn't exist.
            echo "directory not found - creating one"
            mkdir -p ~/etg/"$directory"
        fi
    fi
    dd skip=$startbyte count=$count if=~/etg/largefile of=~/etg/"$filename" bs=1
done <indexfile
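Two possible refinements, for whoever finds this later (untested sketches): GNU dd can do byte-accurate extraction with a much larger block size than bs=1, which matters for thousands of files, and Windows-style backslashes in the index can be normalized as the first thing inside the loop:

p=${p//\\//}                        # turn backslashes into forward slashes
# byte offsets with a sane block size (GNU coreutils only):
dd if=~/etg/largefile of=~/etg/"$filename" skip="$startbyte" count="$count" iflag=skip_bytes,count_bytes bs=64K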
I created a bash script that parses ASCII files into a comma delimited output. It's worked great. Now, a new file layout for these files is being gradually introduced.
My script has now two parsing functions (one per layout) that I want to call depending on a specific marker that is present in the ASCII file header. The script is structured thusly:
#!/bin/bash
function parseNewfile() {...parse stuff...return stuff...}
function parseOldfile() {...parse stuff...return stuff...}
# loop thru ASCII files array
i=0
while [ $i -lt $len ]; do
    # check if file contains marker for new layout
    grep CSVHeaderBox output_$i.ASC
    # calls parsing function based on exit code
    if [ $? -eq 0 ]
    then
        CXD=`parseNewfile`
    else
        CXD=`parseOldfile`
    fi
    echo ${array[$i]} | awk -v cxd=`echo $CXD` ....
    let i++
done >> ${outdir}/outfile.csv
...
The script does not err out, but it always calls the original function parseOldfile and ignores the new one, even when I specifically feed my script several files with the new layout.
What I am trying to do seems very trivial. What am I missing here?
EDIT: Samples of old and new file layouts.
1) OLD File Layout
F779250B
=====BOX INFORMATION=====
Model = R15-100
Man Date = 07/17/2002
BIST Version = 3.77
SW Version = 0x122D
SW Name = v1b1645
HW Version = 1.1
Receiver ID = 00089787556
=====DISK INFORMATION=====
....
2) NEW File Layout
F779250B
=====BOX INFORMATION=====
Model = HR22-100
Man Date = 07/17/2008
BIST Version = 7.55
SW Version = 0x066D
SW Name = v18m1fgu
HW Version = 2.3
Receiver ID = 028910170936
CSVHeaderBox:Platform,ManufactureDate,BISTVersion,SWVersion,SWName,HWRevision,RID
CSVValuesBox:HR22-100,20080717,7.55,0x66D,v18m1fgu,2.3,028910170936
=====DISK INFORMATION=====
....
This may not solve your problem, but a potential performance boost: instead of
grep CSVHeaderBox output_$i.ASC
#calls parsing function based on exit code
if [ $? -eq 0 ]
use
if grep -q CSVHeaderBox output_$i.ASC
grep -q will exit successfully on the first match, so it doesn't have to scan the whole file. Plus, you don't have to bother with the $? variable.
Don't do this:
awk -v cxd=`echo $CXD`
Do this:
awk -v cxd="$CXD"
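The reason: the backtick form word-splits $CXD, so anything past the first whitespace no longer belongs to cxd. A quick illustration (hypothetical value):

CXD="a b"
awk -v cxd=`echo $CXD` 'BEGIN { print cxd }'    # awk sees cxd=a, treats the stray b as the program, and errors out
awk -v cxd="$CXD" 'BEGIN { print cxd }'         # prints: a b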
I'm not sure if this solves the OP's requirement.
What's the need for awk if your function knows how to parse the file?
#!/bin/bash
function f1() {
    echo "f1() says $@"
}

function f2() {
    echo "f2() says $@"
}

FUN="f1"
${FUN} "foo"

FUN="f2"
${FUN} "bar"
I am a bit embarrassed to write this, but I solved my "problem".
After gedit (I am on Ubuntu) erred out several dozen times about "Trailing spaces", I copied and pasted my code into a new file and re-ran my script.
It worked.
I have no explanation why.
Thanks to everyone for taking the time.
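(For what it's worth, a plausible explanation is that the original file had picked up invisible characters, e.g. DOS/Windows CRLF line endings or non-breaking spaces, and copying the text into a new file stripped them. If CRLF was the culprit, something like the following would have cleaned the original in place; the filename is hypothetical.)

sed -i 's/\r$//' myscript.sh    # strip carriage returns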
I've been working on our intro scripting assignment and am having issues calling functions within the script. I am in the second portion of the assignment, and I am just testing to make sure what I have is (hopefully) going to work. I have gathered some directories and ask a yes-or-no question. When I get a 'y', I call a little function I wrote, and when I get an 'n' I call another function; both are simple echoes. What is the issue?
part_two(){
    answer=""
    for value in "$#"; do
        echo "$value"
        while [ "$answer" != "y" -a "$answer" != "n" ]
        do
            echo -n "Would you like to save the results to a file? (y/n): "
            read answer
        done
        if [ "$answer" = "n" ]
        then
            part_six
        elif [ "$answer" = "y" ]
        then
            part_five
        fi
    done
}

part_two $#

part_five(){
    echo -n "working yes";
}

part_six(){
    echo -n "working no";
}
Any help would be greatly appreciated, as always.
Much like in C, a function must be defined before it is used. In your code snippet you are calling part_two (which calls part_five and part_six) before declaring those two functions.
Have you tried moving their definitions to the start of the script?
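Concretely, the snippet from the question should work once the definitions come first, roughly like this (note that you probably also want "$@", all the arguments, rather than $#, which is their count):

#!/bin/bash
part_five(){
    echo -n "working yes";
}

part_six(){
    echo -n "working no";
}

part_two(){
    : # body exactly as in the question
}

part_two "$@"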
EDIT:
In most cases, the best way to deal with this in Bash is to simply define all functions at the start of the script before executing any actual commands. The order of the definitions does not really matter - the shell only looks up a function when it's about to use it - so generally there are no dependency issues etc. that you may have to think about.
EDIT 2:
There are cases where you may not be able to just define a function at the start of the script. A common case is when you use conditional constructs to dynamically select or modify the declaration of a function, e.g.:
if [[ "$1" = 0 ]]; then
function show() {
echo Zero
}
else
function show() {
echo Not-zero
}
fi
In these cases you have to make sure that each function call happens after that function (and any others that it calls) is declared.
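For instance, if the snippet above is saved as show.sh with a call to show appended after the fi, the output depends on the argument:

$ bash show.sh 0
Zero
$ bash show.sh 1
Not-zero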
EDIT 3:
In bash, a function declaration is actually the function foo() { ... } block where you define its implementation; and yes, the function keyword is not strictly necessary. There are no function prototypes as in C; they would not make sense anyway, because shell scripts are generally parsed as they are executed. Newer Bash versions do read a script at once, but they mostly check for syntax errors, not logical errors such as this one.
BTW the official term is "function declaration", but even the Bash info page uses "declaration" and "definition" interchangeably.