Hello developer friends,
I am looking to create a shell script to modify several configuration files at several different paths.
For example: in /etc/nginx, create a .bck backup of the nginx.conf file, and in the .conf file replace the value "/etc/nginx/nginx-cloudflare.conf" with "/etc/nginx/nginx-cloudflare-2022.conf".
This manipulation would have to be done on several files and I would like to automate it as much as possible.
Do you have a script with an easy way to do it?
According to my research, it would be necessary to use a loop together with sed.
I don't really know how it works, so I'm turning to you.
I cannot comment yet due to reputation, but I was going to suggest exactly what you were thinking: create a bash shell .sh script (https://www.w3schools.io/terminal/bash-tutorials/), make it executable with chmod +x filename.sh so you can run it as ./filename.sh, and within it use sed (https://www.man7.org/linux/man-pages/man1/sed.1.html) in-place (--in-place[=SUFFIX]), which can also create backups of the files it edits. The sed search-and-replace format is 's/search/replace/flags'.
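A minimal sketch of what such a script could look like, assuming GNU sed and the nginx paths from the question (the file list and helper name are just examples; substitute your own):

```shell
#!/bin/sh
# Sketch: back up a config file, then replace one string with another.
# Usage: backup_and_replace FILE SEARCH REPLACE
backup_and_replace() {
    cp -- "$1" "$1.bck"        # keep a .bck copy first
    # Use , as the sed delimiter since the values contain slashes.
    sed -i "s,$2,$3,g" "$1"    # GNU sed; -i edits the file in place
}

# Apply the same edit to several files with a loop -- the list here
# is an example; substitute your real paths:
for f in ./nginx.conf; do
    [ -e "$f" ] || continue    # skip files that are not present
    backup_and_replace "$f" \
        /etc/nginx/nginx-cloudflare.conf \
        /etc/nginx/nginx-cloudflare-2022.conf
done
```

To extend it, add more paths to the for list, or more backup_and_replace calls inside the loop for files that need different substitutions.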
My bash shell script requires a temp file. Supposing filename conflicts are not an issue, can I say mktemp is not as good as manually touching a temp file after umask 066?
My assumption is:
mktemp is a separate utility; compared to manually touching a file, it takes slightly more resources.
I've read about the ln -s /etc/passwd symlink attack, but it looks like a story from decades ago, when passwords were not shadowed.
Please correct me if my understanding is wrong.
Those two commands are not meant to do the same thing. mktemp creates a file in a flexible way, and has features to make sure it uses a unique name. touch will modify the timestamp of a file (or create it if it does not exist), but you supply the name.
If you want to create an empty file for which you already have a name, then use touch ; if you are going to write to that file right after, you do not need to create it first, just redirect to it.
But if you really need to make a temporary file and ensure you will not overwrite any other file, touch does nothing for you. It is "lighter", maybe, but useless in this case, and you need mktemp.
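The typical pattern, as a sketch: let mktemp choose the name, and clean the file up on exit.

```shell
#!/bin/sh
# mktemp prints the unique name it chose and creates the file
# with mode 0600, so no other user can read or hijack it.
tmpfile=$(mktemp) || exit 1

# Remove the file automatically when the script exits, however it exits.
trap 'rm -f "$tmpfile"' EXIT

# Use the file as usual.
printf 'scratch data\n' > "$tmpfile"
```

The trap is what makes this safe to use in longer scripts: even if a later command fails, the temporary file does not get left behind.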
The mktemp command was written by Todd C. Miller of OpenBSD to prevent common vulnerabilities in shell scripts. In his own words:
Mktemp is a simple utility designed to make temporary file handling in
shells scripts be safe and simple. Traditionally, people writing
shell scripts have used constructs like:
TFILE=/tmp/foop.$$
which are trivial to attack. If such a script is run as root it may
be possible for an attacker on the local host to gain access to the
root login, corrupt or unlink system files, or do a variety of other
nasty things.
The basic problem is that most shells have no equivalent to open(2)'s
O_EXCL flag. While it is possible to avoid this using temporary
directories, I consider the use of mktemp(1) to be superior both in
terms of simplicity and robustness.
Shadow passwords do not help here. If the script is run as root and it writes to a temporary file in an insecure way, then an attacker could possibly exploit race conditions to modify /etc/passwd or /etc/shadow or both!
I have a bash script which is pretty simple (or so I thought - but I don't write them very often):
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain1.COM/httpdocs/modules/mod_monitor/tmpl/audit.php
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain2.org/httpdocs/modules/mod_monitor/tmpl/audit.php
The script copies the contents of auditbaseline to both domain 1 and domain 2.
For some reason it won't work. With only the first line, it's okay, but when I add the second line the script locks up and the files can't be accessed.
Any help would be really appreciated.
Did you perhaps create this script on a Windows machine? You should make sure that there are no CRLF line breaks in the file. Try using dos2unix (http://www.linuxcommand.org/man_pages/dos2unix1.html) to convert the file in that case.
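If dos2unix is not installed, you can strip the carriage returns with standard tools. A sketch (the demo file name is arbitrary):

```shell
#!/bin/sh
# Demo: create a file with Windows (CRLF) line endings, then strip
# the carriage returns in place -- which is what dos2unix does.
printf 'echo hello\r\n' > crlf-demo.sh

cr=$(printf '\r')
sed -i "s/$cr\$//" crlf-demo.sh    # GNU sed: drop the CR at end of each line
```

You can check for the problem first with `grep "$cr" yourscript.sh`; if it matches, the file has CRLF endings.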
This question has been posted here many times, but it never seems to answer my question.
I have two scripts. The first one contains one or multiple variables, the second script needs those variables. The second script also needs to be able to change the variables in the first script.
I'm not interested in sourcing (where the first script containing the variables runs the second script) or exporting (using environment variables). I simply want to make sure that the second script can read and change (get and set) the variables available in the first script.
(PS. If I misunderstood how sourcing or exporting works, and it applies to my scenario, please let me know. I'm not completely closed to those methods, after what I've read, I just don't think those things will do what I want)
Environment variables are per process. One process cannot modify the variables in another. What you're asking for is not possible.
The usual workaround for scripts is sourcing, which works by running both scripts in the same shell process, but you say you don't want to do that.
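For completeness, this is what the sourcing workaround looks like, sketched with hypothetical file names:

```shell
#!/bin/bash
# vars.sh holds the shared variables (hypothetical file, created
# here just for the demo):
cat > vars.sh <<'EOF'
counter=5
EOF

# Sourcing runs vars.sh inside the current shell process, so its
# variables become visible here -- both scripts share one process.
. ./vars.sh
echo "$counter"   # prints 5
```

The second script could likewise modify counter and then source the file back, but all of it only works because everything runs in one shell process.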
I've also given this some thought. I would use files as variables. For example in script 1 you use for writing variable values to files:
echo $varnum1 > /home/username/scriptdir/vars/varnum1
echo $varnum2 > /home/username/scriptdir/vars/varnum2
And in script 2 you use for reading values from files back into variables:
varnum1=$(cat /home/username/scriptdir/vars/varnum1)
varnum2=$(cat /home/username/scriptdir/vars/varnum2)
Both scripts can read or write the variables at any given time. Theoretically, two scripts could try to access the same file at the same time; I'm not sure exactly what would happen, but since each file only contains one value, the time to read or write should be extremely short.
To reduce those times even further, you can use a ramdisk.
I think this is much better than scripts editing each other (yuk!). Live editing can corrupt a script, and the change only takes effect when you start the script again after the edit.
Good luck!
So after a long search on the web and a lot of trying, I finally found some kind of a solution. Actually, it's quite simple.
There are some prerequisites though.
The variable you want to set already has to exist in the file you're trying to set it in (I'm guessing the variable can be created as well when it doesn't exist yet, but that's not what I'm going for here).
The file you're trying to set the variable in has to exist (obviously. I'm guessing again this can be done as well, but again, not what I'm going for).
Write
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' FILENAME
For example, setting the variable Var1 to the value 5 in the file test.ini:
sudo sed -i 's/^\(Var1=\).*/\15/' test.ini
Read
sudo grep -Po '(?<=VARNAME=).*' FILENAME
For example, reading the variable Var1 from the file test.ini:
sudo grep -Po '(?<=Var1=).*' test.ini
Just to be sure
I've noticed some issues when running the script that sets variables from a different folder than the one where your script is located.
To make sure this always goes right, you can do one of two things:
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' `dirname $0`/FILENAME
So basically, just put `dirname $0`/ (including the backticks) in front of the filename.
The other option is to make `dirname $0`/ a variable (again including the backticks), which would look like this.
my_dir=`dirname $0`
sudo sed -i 's/^\(VARNAME=\).*/\1VALUE/' $my_dir/FILENAME
So basically, if you've got a file named test.ini, which contains this line: Var1= (In my tests, the variable can start empty, and you will still be able to set it. Mileage may vary.), you will be able to set and get the value for Var1
I can confirm that this works (for me), but since all of you, with way more scripting experience than me, didn't come up with this, I'm guessing it's not a great way to do it.
Also, I couldn't tell you the first thing about what's happening in those commands above, I only know they work.
So if I'm doing something stupid, or if you can explain what's happening in the commands above, please let me know. I'm very curious to find out what you think of this solution.
I have a default conf file that we use over and over again in our projects. And, for each project, the file has to be modified. On more than one occasion, the person editing the conf file made time consuming mistakes.
So, I wanted to write a shell script that can be called to modify the conf file.
But, being new to shell scripts, I don't know how to do this. What is the appropriate *nix tool to open a text file, find a string, replace it with another, and then close the file?
Thanks!
Eric
As noted by other commenters, sed, is the typical tool.
Here's an example of an in-place (the -i option) edit of a file:
sed -i 's/Release Two/Testing Beta/g' /path/to/file.txt
You're replacing instances of the first string, Release Two, with Testing Beta everywhere in the file. The leading s means substitute, and the trailing g means replace every match on a line (the default is to replace only the first match on each line). If you want to make a backup you can call
sed -iBACKUP_SUFFIX ...
You should have a look at the sed command. It allows you to edit a stream (a file, for example), so you can substitute, insert, or remove text.
http://www.grymoire.com/Unix/Sed.html
sed
UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
# Your stuff goes here
exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution that changes how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv preserves the old inode while cp creates a new one. Since a file's contents are not actually removed while it is open, you can remove it right away and it will be cleaned up once the shell closes the file.
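You can watch the inode numbers to see this happen (a sketch; `ls -i` prints the inode before the file name):

```shell
#!/bin/sh
# mv keeps the inode; cp allocates a new one.
printf 'v1\n' > script.demo
old_inode=$(ls -i script.demo | awk '{print $1}')

mv script.demo script.demo-old   # same inode, new name
cp script.demo-old script.demo   # brand-new inode
rm script.demo-old

new_inode=$(ls -i script.demo | awk '{print $1}')
# old_inode and new_inode differ: a shell still running the original
# script holds the old inode open, while your edits go to the new file.
```

This is the same trick many editors use internally when they save via a temporary file and rename it over the original.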
According to the bash documentation if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
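A minimal fastbash wrapper might slurp the whole script into memory before running it. A sketch (the wrapper name and behavior are assumptions, and it is created as a local file here just for the demo):

```shell
#!/bin/bash
# Hypothetical fastbash: read the entire script first, then run it,
# so later edits to the file are not seen by the running process.
cat > fastbash <<'EOF'
#!/bin/bash
script_file=$1
shift
exec bash -c "$(cat "$script_file")" "$script_file" "$@"
EOF
chmod +x fastbash

# Try it on a tiny script:
printf 'echo "running: $0, arg: $1"\n' > demo.sh
./fastbash demo.sh hello   # prints: running: demo.sh, arg: hello
```

In real use you would install the wrapper at /usr/local/fastbash and reference it from the bang path, as described above.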
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
rm -f /tmp/copy-$$
cp "$0" /tmp/copy-$$
exec /tmp/copy-$$ "$@"
echo "error copying and execing script"
exit 1
fi
rm $0
# rest of script...
(This will not work if the original script begins with the characters /tmp/copy-)
(This is inspired by R Samuel Klatchko's answer)