I have a bash script which is pretty simple (or so I thought - but I don't write them very often):
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain1.COM/httpdocs/modules/mod_monitor/tmpl/audit.php
cp -f /mnt/storage/vhosts/domain1.COM/private/auditbaseline.php /mnt/storage/vhosts/domain2.org/httpdocs/modules/mod_monitor/tmpl/audit.php
The script copies the contents of auditbaseline to both domain 1 and domain 2.
For some reason it won't work. When the first line is in on its own it's okay, but when I add the second line I can't get it to work: it locks up the scripts and they can't be accessed.
Any help would be really appreciated.
Did you perhaps create this script on a Windows machine? You should make sure that there are no CRLF line breaks in the file. Try using dos2unix (http://www.linuxcommand.org/man_pages/dos2unix1.html) to convert the file in that case.
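A quick way to check for and fix that, assuming the script is called myscript.sh (substitute your actual file name):

file myscript.sh      # reports "... with CRLF line terminators" when the file is affected
dos2unix myscript.sh  # rewrites the file with Unix (LF) line endings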
Hello developer friends,
I am looking to create a Shell script to modify several configuration files at several different paths.
For example: in /etc/nginx, create a .bck backup copy of the nginx.conf file, and in the .conf file replace the value "/etc/nginx/nginx-cloudflare.conf" with "/etc/nginx/nginx-cloudflare-2022.conf".
This manipulation would have to be done on several files and I would like to automate it as much as possible.
Do you have a script with an easy way to do it?
According to my research, this would need a loop over the files and the use of sed.
I don't really know how that works, so I'm turning to you.
I cannot comment yet due to reputation, but I was going to suggest exactly what you were thinking: create a bash shell .sh script (https://www.w3schools.io/terminal/bash-tutorials/) and make it executable with chmod +x filename.sh so you can run it like ./filename.sh. Within it you can use sed (https://www.man7.org/linux/man-pages/man1/sed.1.html) in-place via --in-place[=SUFFIX], which can also create backups of the edited files. The sed search-and-replace format is 's/search/replace/flags'.
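For what it's worth, here is a minimal sketch for the example paths you gave; the file list and the .bck backup suffix are assumptions you would adapt:

#!/bin/bash
# files whose contents should be rewritten; extend this list as needed
files=(/etc/nginx/nginx.conf)

for f in "${files[@]}"; do
    # -i.bck edits in place and keeps a backup next to the original (nginx.conf.bck);
    # '|' is used as the sed delimiter so the slashes in the paths need no escaping
    sed -i.bck 's|/etc/nginx/nginx-cloudflare\.conf|/etc/nginx/nginx-cloudflare-2022.conf|g' "$f"
done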
I'm trying to run several sets of commands in parallel on a few remote hosts.
I've created a script that constructs these commands, and then writes the output in a local file, something along the lines of:
ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile2.txt &
ssh <me>@<ip2> "command" 2> ./path/to/file/newFile3.txt;
...(same repeats itself, with new commands and new file names)...
My issue is that, when my script runs these commands, I am getting the following errors:
bash:  ./path/to/file/newFile1.txt: No such file or directory
bash:  ./path/to/file/newFile2.txt: No such file or directory
bash:  ./path/to/file/newFile3.txt: No such file or directory
...
These files do NOT exist yet; they are meant to be created by the redirections. The directory paths, however, are valid.
The strange thing is that if I copy and paste the whole big command, it works without any issue. I'd rather have it automated, though ;).
Any ideas?
Edit - more information:
My filesystem is the following:
- home
  - User
    - Desktop
      - Servers
        - Outputs
        - ...
I am running the bash script from home/User/Desktop/Servers.
The script creates the commands that need to be run on the remote servers. First things first, it creates the directories where the output files will be stored.
outputFolder="./Outputs"
...
mkdir -p "${outputFolder}/f${fileNumb}"
...
The script then goes on to create the commands that will be called on the remote hosts, and their respective outputs will be placed in the created directories.
The directories are there. Running the commands gives me the errors, yet printing the commands and pasting them in from the same location works for some reason. I have also tried giving the full path to the directory; still the same issue.
Hope I've been a bit clearer.
If this is the exact error message you get:
bash:  ./path/to/file/newFile1.txt: No such file or directory
Then you'll note that there's an extra space between the colon and the dot, so it's actually trying to open a file called " ./path/to/file/newFile1.txt" (without the quotes).
However, to accomplish that, you'd need to use quotes around the filename in the redirection, as in
something ... 2> " ./path/to/file/newFile1.txt"
Or the first character would have to be something other than a regular space: a non-breaking space perhaps, possibly something that some editor might create if you hit Alt-Space or such.
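If you want to check for that, here is a sketch assuming GNU grep and a UTF-8 encoded script (both assumptions):

# print any lines containing a UTF-8 non-breaking space (bytes 0xC2 0xA0), with line numbers
grep -nP '\xC2\xA0' yourscript.sh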
I don't believe you've shown enough to correctly answer the question.
This doesn't look like a problem with ssh, but with the way you are calling the (ssh) commands.
You say that you are writing the commands into a file... presumably you are then running that file as a script. Could you show the code you use to do that? I believe that's your problem.
I suspect you have made a false assumption about the way the working directory changes when you run a script. It doesn't change. You are listing relative paths, so it's important to know what they are relative to. That is the most likely reason why it works when you copy and paste it: you are executing from a different working directory.
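A tiny illustration of that point (the script name is made up):

#!/bin/bash
# whereami.sh: relative paths such as ./Outputs resolve against $PWD,
# the directory of the *caller*, not the directory the script lives in
echo "working directory: $PWD"
echo "script location:   $(dirname "$0")"

Run it from two different directories and compare the first line of output.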
I am new to bash scripting and was building my script based on another one I had seen. I was "running" the command by simply calling the variable where the command was stored:
$cmd
Solved by using:
eval $cmd
instead. My bad, should have given the full script from the start.
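For anyone hitting the same thing, a minimal illustration of the difference (the stored command is made up):

cmd='echo hello 2> ./out.txt'
$cmd        # word splitting only: "2>" and "./out.txt" reach echo as literal arguments
eval $cmd   # the string is re-parsed by the shell, so the redirection actually takes place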
I would like to delete some folder on an Ubuntu 8.04 Server.
I would like to start a script to delete this folder.
I start an ssh session to the server.
My script looks like this:
#!/bin/bash
rm -r /var/lib/backuppc/pc/PC1/
rm -r /var/lib/backuppc/pc/PC2/
I run the script like this:
sh scriptname.sh
But I get this message:
rm: cannot remove `/var/lib/backuppc/pc/PC1/\r': No such file or directory
rm: cannot remove `/var/lib/backuppc/pc/PC1/\r': No such file or directory
I'm sorry, but I have never used shell scripts on Linux before.
I think it's my fault because I don't know the basics :-(
Can somebody help me? I have to delete ~80 folders... :-(
It looks like there are some "junk" characters after your folder names (namely, \r). To be sure, type cat -A scriptname.sh and check whether you can see some weird characters at the end of the lines. If so, I think the easiest thing for you (since you have few lines) is to manually delete the end of those lines and re-type it. (I'm talking about the last two or three characters only.)
Then type cat -A scriptname.sh again and see if the characters have disappeared. If so, you should be good to go with your code.
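Since there are ~80 lines, stripping the carriage returns mechanically may be less error-prone than re-typing; a sketch assuming GNU sed:

cat -A scriptname.sh             # CRLF endings show up as ^M$ at the end of each line
sed -i 's/\r$//' scriptname.sh   # delete the trailing carriage return on every line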
I think this question falls under pipes, and I am bad at those.
One of my shell scripts generates a file with millions of rows.
Before I can use it with another command, I need to edit this file: I need to add some text, e.g. 'txt', in front of every line.
What I am currently doing now is:
- exit the shell script after the file is generated
- open the file in vim
- use the command :g/^/s//txt/g to add txt at the start of each line
- save the file
- use it in the remaining shell script
I am sure there is a more efficient way that I am not aware of. Thanks for the help.
As some people said in the comments, you can use GNU sed to do that:
sed -i 's/^/txt/' yourfile.txt
The -i stands for --in-place and edits your file directly instead of printing to stdout.
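And since the question mentions pipes: if the next step of the script reads from standard input, you can skip the in-place edit entirely and stream instead (next_command stands for whatever you run afterwards):

# prefix every line on the fly and hand the result straight to the next step
sed 's/^/txt/' yourfile.txt | next_command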
UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
- I write a shell script (bash) for a quick and dirty job
- I run the script, and it runs for quite a while
- While it's running, I edit a few lines in the script, configuring it for a different job
- But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
    # Your stuff goes here
    exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution to the way you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, as long as you don't change the contents of the open inode everything will work okay.
The above works because mv preserves the old inode while cp creates a new one. Since a file's contents are not actually removed while it is still open, you can remove it right away and it will be cleaned up once the shell closes the file.
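You can watch this happen with ls -i, which prints the inode number (the numbers below are invented):

ls -i script      # 123456 script
mv script script-old
cp script-old script
rm script-old
ls -i script      # 123789 script -- a new inode; the running shell still reads the old one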
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
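A sketch of what such a wrapper could contain, reusing the read-everything-then-exec idea from the question (an illustration, not a tested install recipe):

#!/bin/bash
# /usr/local/fastbash: slurp the target script into memory, then run it via -c,
# so later edits to the file cannot affect the already-running instance
exec bash -c "$(cat "$1")" "$1" "${@:2}"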
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self-contained way to make a script resistant to this problem is to have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
    rm -f "/tmp/copy-$$"
    cp "$0" "/tmp/copy-$$"
    exec "/tmp/copy-$$" "$@"
    # exec only returns if it failed:
    echo "error copying and execing script"
    exit 1
fi
rm -- "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)