Update var in ini file using bash

I am attempting to write a bash script to configure various aspects of a server. The task here is replacing the value of a variable in a conf file (ini format) with another value. For example:
[ssh-iptables]
enabled = false
And I simply need to change false to true.
Typically I'd just do this with a simple bit of sed
sed -i 's/^enabled = false/enabled = true/g' /etc/fail2ban/jail.conf
But enabled = false exists in multiple places.
I've tried using awk with no success
awk -F ":| " -v v1="true" -v opt="enabled" '$1 == "[ssh-iptables]" && !f {f=1}f && $1 == opt{sub("=.*","= "v1);f=0}1' /etc/fail2ban/jail.conf
The above was sourced from this forum thread, but I don't really understand it well enough to make it work in a script. All it seems to do is the equivalent of cat /etc/fail2ban/jail.conf.
I have found a few other scripts which are considerably longer, which isn't ideal as this will happen to loads of ini files, so I'm hoping someone can help me correct the above code or point me in the right direction.
Apologies if this belongs on ServerFault, but as it's scripting rather than the intricacies of server configuration itself I figured here might be more apt.

Assuming your format is such that there are no square-bracket lines (like [ssh-iptables]) within sections, I would use your sed solution above but restrict the substitution to within that block, like so:
sed -i '/^\[ssh-iptables\]$/,/^\[/ s/^enabled = false/enabled = true/' /etc/fail2ban/jail.conf
The address range at the beginning tells the following substitution to run only between the line that is exactly [ssh-iptables] and the next line that starts with a [. It uses two regular expressions separated by a comma to indicate the bounds.
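Since the same kind of edit needs to happen across loads of ini files, one option (a sketch of my own, not from the original answer) is to wrap that sed call in a small function; it assumes the section, key and value contain no characters that are special to sed:
set_ini_value() {
    local file=$1 section=$2 key=$3 value=$4
    # Restrict the substitution to the lines between [section] and the next [.
    sed -i "/^\[${section}\]\$/,/^\[/ s/^${key} = .*/${key} = ${value}/" "$file"
}
set_ini_value /etc/fail2ban/jail.conf ssh-iptables enabled true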

If you are open to using external applications, you might be interested in crudini.
Example:
[oauth2provider]
module = SippoServiceOAuth2Provider
backend[] = none
wiface = public
; [calldirection]
; module = SippoServiceCallDirection
; backend[] = none
; wiface = internal
A plain grep will not filter out the commented-out entries.
With crudini, getting, setting and modifying values is easier:
$ crudini --get /myproject/config/main.ini oauth2provider wiface
public
$ crudini --get /myproject/config/main.ini calldirection wiface
Section not found: calldirection
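For the original question, setting a value is just as short. A minimal sketch, assuming crudini is installed and the file is writable:
# Set enabled = true in the [ssh-iptables] section (the key is created if missing).
crudini --set /etc/fail2ban/jail.conf ssh-iptables enabled true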
I was working on a bash-only app and moved to this approach. Just a suggestion.

You might consider using m4 instead of sed in this case. This uses variable replacement and I think keeps the file looking readable. Your m4 template might look like this:
[ssh-iptables]
enabled=SSH_IPTABLES_ENABLED
Now, you call m4 with the following parameters (which can be called from a bash script):
m4 -DSSH_IPTABLES_ENABLED=true input.m4 > output.ini
or:
m4 -DSSH_IPTABLES_ENABLED=false input.m4 > output.ini
This is an overly simple way of using m4; if you read about it you'll find you can do some really nifty things (it is the infrastructure upon which autoconf/automake was originally built).

awk '/^\[ssh-iptables\]/ {ok=1}
     ok==1 && $0=="enabled = false" {print "enabled = true"; ok=0; next}
     {print $0}' infile > tmp
mv tmp infile
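If you have GNU awk 4.1 or later, the temp-file-and-mv step can be avoided with its inplace extension; a small variation of the same script (my addition, not from the answer):
# -i inplace requires gawk >= 4.1 and edits the file in place, like sed -i.
gawk -i inplace '/^\[ssh-iptables\]/ {ok=1}
                 ok==1 && $0=="enabled = false" {print "enabled = true"; ok=0; next}
                 {print}' /etc/fail2ban/jail.conf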

Related

Bash - Read Directory Path From TXT, Append Executable, Then Execute

I am setting up a directory structure with many different R and bash scripts in it. They will all be referencing files and folders. Instead of hardcoding the paths I would like to have a text file where each script can search for a descriptor in the file (see below) and read the relevant path from that.
Getting the search-append to work in R is easy enough for me; I am having trouble getting it to work in Bash, since I don't know the language very well.
My guess is it has something to do with the way awk works / stores the variable, or maybe the way the / works on the awk output. But I'm not familiar enough with it and would really appreciate any help.
Text File "Master_File.txt":
NOT_DIRECTORY "/file/paths/Fake"
JOB_TEST_DIRECTORY "/file/paths/Real"
ALSO_NOT_DIRECTORY "/file/paths/Fake"
Bash Script:
#! /bin/bash
master_file_name="Master_File.txt"
R_SCRIPT="RScript.R"
SRCPATH=$(awk '/JOB_TEST_DIRECTORY/ { print $2 }' $master_file_name)
Rscript --vanilla $SRCPATH/$R_SCRIPT
The last line, $SRCPATH/$R_SCRIPT, seems to be replacing part of $SRCPATH with the name of $R_SCRIPT, which outputs something like /RScript.Rs/Real instead of what I would like, which is /file/paths/Real/RScript.R.
Note: if I hard-code the path, path="/file/paths/Real", then $path/$R_SCRIPT outputs what I want.
The R Script:
system(command = "echo \"SUCCESSFUL_RUN\"", intern = FALSE, wait = TRUE)
q("no")
Please let me know if there's any other info that would be helpful, I added everything I could think of. And thank you.
Edit Upon Answer:
I found two solutions.
Solution 1 - By Mheni:
[ see his answer below ]
Solution 2 - My Adaptation of Mheni's Answer:
After seeing Mheni's note on ignoring the " quotation marks, I looked up some more and found out it's possible to change the character that awk uses to determine where to separate the text. By adding -F\" to the awk call, it successfully separates based on the " character.
The following works:
#!/bin/bash
master_file_name="Master_File.txt"
R_SCRIPT="RScript.R"
SRCPATH=$(awk -F\" -v r_script=$R_SCRIPT '/JOB_TEST_DIRECTORY/ { print $2 }' $master_file_name)
Rscript --vanilla $SRCPATH/$R_SCRIPT
Thank you so much everyone that took the time to help me out. I really appreciate it.
The problem is the quotes around the path; this change to the awk command strips them when printing the path.
There was also a space in the shebang line that shouldn't be there, as @david mentioned.
#!/bin/bash
master_file_name="/tmp/data"
R_SCRIPT="RScript.R"
SRCPATH=$(awk '/JOB_TEST_DIRECTORY/ { if(NR==2) { gsub("\"",""); print $2 } }' "$master_file_name")
echo "$SRCPATH/$R_SCRIPT"
OUTPUT
[1] "Hello World!"
In my example the paths are in /tmp/data:
NOT_DIRECTORY "/tmp/file/paths/Fake"
JOB_TEST_DIRECTORY "/tmp/file/paths/Real"
ALSO_NOT_DIRECTORY "/tmp/file/paths/Fake"
and in the path that corresponds to JOB_TEST_DIRECTORY I have a simple hello-world R script:
[user@host tmp]$ cat /tmp/file/paths/Real/RScript.R
print("Hello World!")
I would use
Master_File.txt:
NOT_DIRECTORY="/file/paths/Fake"
JOB_TEST_DIRECTORY="/file/paths/Real"
ALSO_NOT_DIRECTORY="/file/paths/Fake"
Bash Script:
#!/bin/bash
R_SCRIPT="RScript.R"
if [[ -r /path/to/Master_File.txt ]]; then
    . /path/to/Master_File.txt
else
    echo "ERROR -- Can't read Master_File"
    exit
fi
Rscript --vanilla $JOB_TEST_DIRECTORY/$R_SCRIPT
Basically, you create a key=value configuration file, source it, then use the keys as variables for whatever you need throughout the script.

Multiline grep with specific text

There is an XML file with lots of <A_tag>s in it.
I need to see those A tags (and their children, so the tags' whole content) that have at least one <C_tag>.
So this block should match (therefore contained in the result):
<A_tag>
...
...
<C_tag attr1="" ... attrn="" />
...
</A_tag>
I tried using pcregrep, but I don't know how to specify a block ending that is longer than one character (and </A_tag> is longer than that; something like a [^>] regexp would be easy for me too).
I also tried awk, but couldn't manage the goal with it either.
If someone experienced would help me, please make your command separate the found blocks with an empty line too; that way I could learn more.
Following up on the xmllint comment:
xmllint --xpath '(//A_tag/C_tag/..)' x.xml
This will look for C_tag under A_tag, and then display the parent A_tag.
Output:
<A_tag>
<C_tag attr1="" attrn=""/>
</A_tag>
Yeah, well in my case, this was the solution:
xmllint --shell x.xml <<< 'xpath //A_tag//C_tag/ancestor::A_tag'
It's because my xmllint version doesn't support the --xpath option.
Also, C_tag could be any descendant of A_tag, not just a direct child (which I didn't clarify in the question).
However, dash-o's answer seems to be correct.
My only problem is that the XML file I'm working with contains 4.5 million lines, and xmllint turned out to be slow, as it actually parses the file.
If you have a more general solution that works with awk or pcregrep, please share it with me. They would be a good fit here as they just work on patterns.
Otherwise I'll accept the original answer tomorrow.
If the file is pretty-printed (or follows similar rules), it's possible to write a small awk script that only acts on the A_tag and C_tag lines:
awk '
/<A_tag>/ { in_a=$0 ; c="" ; next }
in_a { in_a = in_a RS $0}
/<C_tag/ { c=$0 ; next }
/<\/A_tag>/ { if ( in_a && c ) { print in_a ; in_a="" ; c=""} }
' x.xml
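The question also asked for an empty line between the found blocks; a small tweak of the script above (my own variation, not part of the original answer) adds it after each printed block:
awk '
  /<A_tag>/   { in_a=$0; c=""; next }   # start collecting a new A_tag block
  in_a        { in_a = in_a RS $0 }     # accumulate lines inside the block
  /<C_tag/    { c=$0; next }            # remember that a C_tag was seen
  /<\/A_tag>/ { if (in_a && c) { print in_a; print ""; in_a=""; c="" } }
' x.xml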

Use sed (or awk) to add entry in fstab when match found

I have a need to edit lines in /etc/fstab and add/change fsoptions on lines where a matching volume is found. I have tried using sed and putting found blocks into registers to place them back, but am finding this one a challenge. Perhaps sed is not the best tool. I tried augeas, and while it appends, it did not replace, and it also failed if a matching line wasn't found but needed to be added (e.g. one only visible using the 'mount' command, such as /dev/shm).
For example, if /etc/fstab has a line:
/dev/mapper/VolGroup1-tmp /tmp xfs dev,nosuid 0 0
I want to make it
/dev/mapper/VolGroup1-tmp /tmp xfs nodev,nosuid,noexec 0 0
Note that the volume group in the first field could be any name.
The KEYWORD in the string would (in this example) be /tmp (but be careful not to match /var/tmp unless specified).
The filesystem could be anything (not necessarily xfs).
Any 'exec' or 'suid' present (for example) needs to be replaced, and even if not present, 'noexec' or 'nosuid' inserted.
The trailing '0 0' needs to be retained.
I'm sure I am missing an easy way to do this. Using 'mount -o remount,noexec /tmp' doesn't write to /etc/fstab so I guess the only way to make changes persistent is to edit /etc/fstab directly?
I am actually going to wrap the solution in Puppet. The augeas example below fails in two regards:
if a line (e.g. /dev/shm) does not exist in /etc/fstab, it fails;
if a line exists and has 'exec', it appends 'noexec' but leaves the exec in place.
augeas{ "/etc/fstab - ${opt} on ${mount}":
context => '/files/etc/fstab',
changes => [
"ins opt after /files/etc/fstab/*[file = '${mount}']/opt[last()]",
"set *[file = '${mount}']/opt[last()] ${opt}",
],
onlyif => "match *[file = '${mount}']/opt[. = '${opt}'] size == 0",
notify => Exec["remount_${mount}_${opt}"],
}
I'm sure I am missing an easy way to do this. Using 'mount -o remount,noexec /tmp' doesn't write to /etc/fstab so I guess the only way to make changes persistent is to edit /etc/fstab directly?
There is no for-purpose CLI for modifying mount records in /etc/fstab, if that's what you mean. Editing the file manually with a text editor is the old-school way.
As I mentioned in comments, the standard Puppet approach for working with mount records is via Mount resources. As long as I'm writing an answer, I repeat that using these is the way you should be going about the job.
You objected in comments that
the issue is that the first string (fsname) could be anything, as could the fstype. I also want to preserve existing fsoptions that don't clash with new desired settings.
That is the issue, but not in the way I think you mean. You are in a tight spot because you are trying to accommodate multiple authorities over this aspect of your nodes' configurations. The best practice would be to give the responsibility over the mounts of interest wholly over to Puppet. It has more than enough flexibility to provide for different mount configurations on different machines.
However, if you are determined to use Puppet in this way, then it can be done. But although the task is relatively easy to describe, the particulars make it relatively complex. A sed-based approach is possible, but would be comparatively lengthy and extremely cryptic. A better command-line tool for the main job would be awk, and in particular, this awk script will do the job for the case you've presented:
# Pass through comment lines and any line whose mount point is not /tmp.
$1 ~ /^#/ || $2 != "/tmp" {print; next}
# Pass through lines that already carry both nosuid and noexec.
$4 ~ /nosuid/ && $4 ~ /noexec/ {print; next}
{
    # Rebuild the options field: drop any exec/noexec/suid/nosuid variants,
    # append noexec,nosuid, and keep the trailing dump/pass fields.
    # (Note: awk's for-in order is unspecified, so remaining options may be reordered.)
    split($4, opts, ",")
    printf "%s %s %s ", $1, $2, $3
    for (i in opts) {
        if (opts[i] !~ /(no)?(exec|suid)/) {
            printf "%s,", opts[i]
        }
    }
    printf "noexec,nosuid %s %s\n", $5, $6
}
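As a rough usage sketch (my own, not part of the answer; the file name is illustrative), you could run the script against /etc/fstab, review the diff, and only then move the result into place:
# fix_tmp_fstab.awk contains the awk script above (hypothetical file name)
awk -f fix_tmp_fstab.awk /etc/fstab > /tmp/fstab.new
diff /etc/fstab /tmp/fstab.new          # review the change first
cp /etc/fstab /etc/fstab.bak && cp /tmp/fstab.new /etc/fstab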
Wrapping that into an Exec resource and making any other adaptations to your specific requirements are left as an exercise.

Counting char in word with different delimiter

I am writing a shell script in which I get the location of java via which java. As a response I get (for example)
/usr/pi/java7_32/jre/bin/java.
I need the path to be cut so it ends with /jre/, more specifically
/usr/pi/java7_32/jre/
as the program this information is provided to cannot handle the longer path.
I have used cut with / as the delimiter, and as I thought that the directory of the Java installation is always the same, a
cut -d'/' -f1-5
worked just fine to get this result:
/usr/pi/java7_32/jre/
But as Java could be installed somewhere else as well, for example at
/usr/java8_64/jre/
the statement would not work correctly.
I have tried sed, awk, cut and different combinations of them but found no answer I liked.
As the title says, I would count the number of appearances of the character / until the substring jre/ is found, under the premise that the shell counts from left to right.
The resulting count would be the field up to which I want to cut with the delimiter.
path=$(which java) # example: /usr/pi/java7_32/jre/bin/java
i=0
#while loop with a statement which would go through path
while substring != jre/ {
if (char = '/')
i++
}
#cut the path
path=$(echo "$path" | cut -d'/' -f 1-"$i")
#/usr/pi/java7_32/jre result
The problem is the possible difference in the path before and after
/java7_64/jre/, i.e. */java*/jre/.
I am open to any ideas and solutions, thanks a lot!
You can use the shell's built-in parameter operations to get what you need. (This avoids the need to create other processes to extract the information.)
jpath="$(which java)"
# jpath now /usr/pi/java7_32/jre/bin/java
echo ${jpath%jre*}jre
produces
/usr/pi/java7_32/jre
The same works for
jpath=/usr/java8_64/jre/
The % removes the shortest match of the shell glob pattern from the right-hand side of the string. Then we just put jre back to get your required path.
You can overwrite the value obtained from which java:
jpath=${jpath%jre*}jre
IHTH
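A side note of my own (not part of the answer): % removes the shortest matching suffix and %% the longest, which only matters if jre can occur more than once in the path:
jpath=/opt/jre/java8_64/jre/bin/java   # hypothetical path containing "jre" twice
echo "${jpath%jre*}jre"                # /opt/jre/java8_64/jre  (cut at the last "jre")
echo "${jpath%%jre*}jre"               # /opt/jre               (cut at the first "jre")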
You can get the results with grep:
path=$(echo "$path" | grep -o '.*/jre/')

Replace last line of XML file

Looking for help creating a script that will replace the last line of an XML file with a tag. I have a few hundred files so I'm looking for something that will process them in a loop. I've managed to rename the files sequentially like this:
posts1.xml
posts2.xml
posts3.xml
etc...
to make it easier to loop through. But I have no idea how to write a script to do this. I'm open to using either Linux or Windows (but I would guess that Linux is better for this kind of task).
So if you want to append a line to every file:
sed -i '$a<YOUR_SHINY_NEW_TAG>' *xml
To replace the last line:
sed -i '$s/.*/<YOUR_SHINY_NEW_TAG>/' *xml
But do note, sed is not the ideal tool for modifying XML.
XMLStarlet is a command-line toolkit for performing XML parsing and manipulations. Note that as an XML-aware toolkit, it'll respect XML structure, character encoding and entity substitution.
Check out the ed subcommand to see how to modify documents. You can wrap this in a standard bash loop.
e.g. in a doc consisting of a chain of <elem>s, you can add a following <added>5</added>:
mkdir new
for x in *.xml; do
xmlstarlet ed -a "//elem[count(//elem)]" -t elem -n added -v 5 $x > new/$x
done
Linux way using sed:
To edit the last line of the file in place, you can use sed:
sed -i '$s_pattern_replacement_' filename
To change the whole line to "replacement" use $s_.*_replacement_. Be sure to escape any _'s in replacement with a \.
To loop over files, just use for:
for f in /path/posts*.xml; do sed -i '$s_.*_replacement_' $f; done
This, however, is a dirty way of doing it, as it's not aware of the XML structure (and XML itself doesn't care about newlines). You have to be sure the last line of the files contains exactly what you expect it to.
It makes little to no difference whether you're on Linux, Windows or macOS.
The question is what language you want to use.
The following is an example in C# (not optimized, but read it as pseudocode):
string rootDirectory = @"c:\myfiles";
var files = Directory.GetFiles(rootDirectory, "*.xml");
foreach (var file in files)
{
var lines = File.ReadAllLines(file);
lines[lines.Length - 1] = "whatever you want here";
File.WriteAllLines(file, lines);
}
You can compile this and run it on Windows, Linux, etc..
Or you could do the same in Python.
Of course this method does not actually parse the XML, but you just wanted to replace the last line, right?
