Bash script sudo and variables [duplicate] - bash

This question already has an answer here:
Bash foreach loop works differently when executed from .sh file [duplicate]
(1 answer)
Closed 4 years ago.
Totally new to Bash here; I've actually avoided it like the plague for 10 years.
Today, there is no way around it.
After a few hours of beating my head against the keyboard, I discovered that when the script is run with sudo, the brace expansion that feeds my loop variable doesn't happen.
So I have something like
somescript.sh
for i in {1..5}
do
filename=somefilenumber"$i".txt
echo $filename
done
Now if I run it on the command line,
user@deb:~$ ./somescript.sh
I get the expected
somefilenumber1.txt
somefilenumber2.txt
somefilenumber3.txt
somefilenumber4.txt
somefilenumber5.txt
but if I run with sudo, like
user@deb:~$ sudo ./somescript.sh
I'll get this
somefilenumber{1..5}.txt
This is a huge problem because I'm trying to cp files and rm files in a loop with the variable.
So here is the code with cp and rm
for i in {1..10}
do
filename=somefilenumber"$i".txt
echo $filename
cp "$filename" "someotherfilename.txt"
rm "$filename"
done
I end up getting
cp: cannot stat 'somefilenumber{1..5}.txt': No such file or directory
rm: cannot remove 'somefilenumber{1..5}.txt': No such file or directory
I need to run sudo also because of other programs that require it.
Is there any way around this?
Even if nothing else requires sudo and I don't use it, the rm command will prompt me for every file, asking whether I'm sure I want to remove it or not. The whole point is to not be sitting here tied to the computer while it runs through hundreds of files.

You could try replacing {1..10} with $(seq 1 10):
for i in $(seq 1 10)
do
filename=somefilenumber"$i".txt
echo $filename
cp "$filename" "someotherfilename.txt"
rm "$filename"
done
Your problem sounds like something is wrong in root's environment, or like the script is being run by a different shell. Do you start the script with:
#!/bin/bash
? If that shebang line is missing, running the script directly from your own bash session falls back to bash itself, but under sudo the fallback is typically /bin/sh (dash on Debian), which does not perform brace expansion, so {1..5} is passed through literally, exactly as your output shows.
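For what it's worth, a minimal version of the loop with the shebang, quoting, and a non-interactive rm in place (the file names are just the ones from the question) could look like this:
#!/bin/bash
for i in {1..10}
do
    filename="somefilenumber$i.txt"
    echo "$filename"
    cp -- "$filename" "someotherfilename.txt"
    rm -f -- "$filename"    # -f suppresses the "are you sure?" prompt mentioned in the question
done
With the shebang in place it no longer matters which shell sudo (or anything else) uses to start the script, because bash always ends up interpreting it.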

Related

How to make folders for individual files within a directory via bash script?

So I've got a movie collection that's dumped into a single folder (I know, bad practice in retrospect.) I want to organize things a bit so I can use Radarr to grab all the appropriate metadata, but I need all the individual files in their own folders. I created the script below to try and automate the process a bit, but I get the following error.
Script
#! /bin/bash
for f in /the/path/to/files/* ;
do
[[ -d $f ]] && continue
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
EDIT
So I've now run the script through Shellcheck.net per the suggestion of Benjamin W. It doesn't throw any errors according to the site, though I still get the same errors when I try running the command.
EDIT 2
No errors now, but the script does nothing when executed.
Assignments are evaluated only once, and not whenever the variable being assigned to is used, which I think is what your script assumes.
You could use a loop like this:
for f in /path/to/all/the/movie/files/*; do
mkdir "${f%.*}"
mv "$f" "${f%.*}"
done
This uses parameter expansion instead of cut to get rid of the file extension.
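As a quick illustration of what ${f%.*} does (the path here is made up):
f="/the/path/to/files/Some Movie (2008).mkv"
echo "${f%.*}"     # prints /the/path/to/files/Some Movie (2008)
mkdir "${f%.*}"    # the new directory is therefore named after the file, minus its extension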

In bash i'm building an update script, how to update the updater script

I am building a little script to update application files on a raspberry pi.
It will do the following:
Download a zip file of the application files
Unzip them
Copy each one to the right place and make it executable etc as needed.
The problem I'm having is that one of the files is updatescript.sh itself.
I've read that it is dangerous to update / change a bash script while it is executing. See Edit shell script while it's running
Is there a good way to achieve what I'm trying to do?
What you've read is badly overblown.
It's completely safe to overwrite a shell script in-place by mving a different file over it. When you do this, the old file handle is still valid, referring to the original unmodified file contents. What you can't safely do is edit the existing file in-place.
So, the below is fine (and is what all your OS-vendor update tools like RPM do in effect):
#!/usr/bin/env bash
tempfile=$(mktemp "$BASH_SOURCE".XXXXXX)
if curl https://example.com/whatever >"$tempfile" &&
curl https://example.com/whatever.sig >"$tempfile.sig" &&
gpgv "$tempfile.sig" "$tempfile"; then
chown --reference="$BASH_SOURCE" -- "$tempfile"
chmod --reference="$BASH_SOURCE" -- "$tempfile"
sync # force your filesystem to fully flush file contents to disk
mv -- "$tempfile" "$BASH_SOURCE" && rm -f -- "$tempfile.sig"
else
rm -f -- "$tempfile" "$tempfile.sig"
exit 1
fi
...whereas this is risky:
curl https://example.com/whatever >/usr/local/bin/whatever
So do the first thing, not the second one: when downloading a new version of your script, write it to a different file, and only rename it over the original when the download has succeeded. That's what you want to do anyhow to ensure atomicity.
(There are also some demonstrations of code-signing validation practices above because, well, you need them when building an updater. You wouldn't be trying to distribute code via an automated download without verifying a signature, right? Because that's how one simple break-in to your web server results in every single one of your customers being 0wned. The above expects the public side of your code-signing keys to be in ~/.gnupg/trustedkeys.gpg, but you can put trustedkeys.gpg in any directory and point to it with the environment variable GNUPGHOME.)
Even if you don't write your update code safely, the risk is still trivial to mitigate. If you move the body of your script into a function, such that it has to be completely read before any part of it can be executed, then there's no part of the file that isn't already read at the time when execution begins.
#!/usr/bin/env bash
main() {
echo "Logic all goes here"
}; { main "$@"; exit; }
Because { main "$@"; exit; } is part of a compound command, the parser reads the exit before it starts executing the main, so it's guaranteed that no further source-file content will be read after main exits, even if some future bash release didn't handle input line-by-line in the first place.
Basically do something along:
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$@"
fi
Check if you are running from a temporary directory
If you are not, copy yourself and rerun from the temporary directory
You can even pass some variables/state along by using environment variables or arguments. Then you can update yourself with a simple cp, as the old path isn't sourced (or even opened) anymore.
cp "new_script_version.sh" "$REALPATH"
The script simply looks like this:
#!/bin/bash
# we need to be run from /tmp directory
shouldbe="/tmp/$(basename "$0")"
if [ "$0" != "$shouldbe" ]; then
cp "$0" "$shouldbe"
exec env REALPATH="$0" "$shouldbe" "$@"
fi
echo "Updatting...."
echo "downloading zip files"
echo "unziping zip files..."
echo "Copying each zip files etc."
cp directory"new_updatescript.sh "$REALPATH"
echo "Update succedded"
Live/test version available at tutorialspoint.
One could also add some flock locking to the script, just in case; a sketch follows below.
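A minimal sketch of such locking, assuming flock(1) is available and using a made-up lock-file path:
# take an exclusive, non-blocking lock on a dedicated lock file
exec 200>/tmp/updatescript.lock     # hypothetical lock-file location
if ! flock -n 200; then
    echo "another update is already running" >&2
    exit 1
fi
# ... the rest of the update logic runs while fd 200 holds the lock ...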

Bash script to run chmod and skip files containing badstr

I apologize for my question, but I am a beginner; I am just starting to code and learn, and I have no clue what I am doing, but still, I am learning. I took a community course and struggled with my homework. I've done 2 of the 3 assignments and I'm struggling with no. 3.
Assignment is:
"write BASH script to run CHMOD 644 command on file /folder/file1, /folder/file2 up to file /folder/file28 and skip all files containing string badstr. I have no clue how to do it, I am searching and reading all morning and still didn't figure it out. Can someone please help me?
Go step by step:
Create a loop over the files (which means that you have to generate that list of files) -> for
Check if every file being processed contains the forbidden string -> grep
Perform the chmod if the file passes the test -> if, test, chmod
So based on your answer I've got this now. Hopefully I am on the right track:
#!/bin/bash
FILES=/path/to/*
for f in $FILES
do
echo "Processing $f file..."
cat $f
while read -r str
do
echo "$str"
grep "$str" /path/to/other/files
done < inputfile
chmod g+w `cat inputfile`
done
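For reference, a sketch that follows the three steps from the answer above (assuming the files really are named /folder/file1 through /folder/file28 and that badstr is a fixed string):
#!/bin/bash
for i in $(seq 1 28); do
    f="/folder/file$i"
    [ -f "$f" ] || continue              # skip names that don't exist
    if grep -q "badstr" "$f"; then       # does the file contain the forbidden string?
        echo "Skipping $f (contains badstr)"
        continue
    fi
    chmod 644 "$f"                       # change permissions only if the test passed
done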

Insert delay between lines of bash

I have a very simple renaming script I'm running in OSX Terminal. It looks like this:
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1140122_alternate1A.tif
I usually have several hundred lines of rename code like this one for all the files I have to rename.
However, I think the network security at work is messing with the code, because it will randomly jack up the file names. I think it's interrupting the code; the code is so simple that I can't think of another reason why it wouldn't work.
I want to try adding a 1sec delay between each line, but how? I've read that something like sleep 1s might work but do I have to add that between every single line? That's going to be a headache if that's the case. If it is, is there another way?
UPDATE: I have a delay working but still getting the same problems as before. This is what Terminal returns:
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate1.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate1A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate2.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate2A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate3.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Remv -nvest/1247136_alternate3A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_alternate4.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Remv -nTest/1247136_alternate4A.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_lifestyle.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Renmv -nv /Volume36_lifestyleA.tif
mv -nv /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_standard.tif /Volumes/COMMON-LIC-PHOTO/DATA/James/Rename_Test/1247136_standardA.tifç^C^C^C^C^C
It's throwing up all kinds of junk in the rename part. It's messing with the file names and the directory names and I can't figure out why.
If you are planning to perform all those mv commands from the terminal, you can make a bash alias:
alias mvd='sleep 2s && mv'
In a script, since aliases are not expanded by default in non-interactive shells, you can define a similar function at the beginning of your script:
function mvd { sleep 2s && mv "$@"; }
The only thing you need to do is to use the new mvd command instead of mv.
Tip: in the alias case you can also name your alias mv (the same name as the command), as shown below.
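For example, a sketch of that same-name alias (the paths are placeholders; command ensures the real mv binary runs rather than the alias expanding again):
alias mv='sleep 1s && command mv'
mv -nv /some/dir/old_name.tif /some/dir/new_name.tif   # now pauses one second before each rename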
If you already have a script with hardcoded paths (e.g., the script looks like this):
mv -nv /path1 /path2
mv -nv /path3 /path4
...
Then probably the simplest thing to do would be to define a function at the top of the script by adding:
mv() { command mv "$@"; sleep 1; }
The following script just reads in your file of commands and inserts a sleep after each command:
while read curr_line; do
echo curr_line $curr_line
return_msg=$( $curr_line ) # execute cmd
# may want to do error checking on value of error variable $? and return_msg
sleep 1
done < ./input_file_of_original_cmds.txt # read in that file

Directory name created with a dot, using shell script

I am using Cygwin Terminal to execute shell scripts on my Windows 7 system.
I am creating a directory, but it is getting created with a dot in its name.
test.sh
#!/bin/bash
echo "Hello World"
temp=$(date '+%d%m%Y')
dirName="Test_$temp"
dirPath=/cygdrive/c/MyFolder/"$dirName"
echo "$dirName"
echo "$dirPath"
mkdir -m 777 $dirPath
On executing sh test.sh it creates the folder as Test_26062015 followed by 3 special characters, while the expectation is just Test_26062015. Why are these 3 special characters appearing, and how can I correct it?
Double quote the $dirPath in the last command and add -p to ignore mkdir failures when the directory already exists: mkdir -m 777 -p "$dirPath". Besides this, take care when combining variables and strings: dirName="Test_${temp}" looks better than dirName="Test_$temp".
Also, use a static-analysis tool such as ShellCheck on your scripts.
UPDATE: Analyzing the debug output of sh -x showed that the issue was caused by DOS-style line endings in the OP's script. Converting the file to UNIX format solved the problem.
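For reference, a couple of common ways to do that conversion (assuming dos2unix or GNU sed is available in the Cygwin environment):
dos2unix test.sh
# or, with GNU sed, strip the trailing carriage returns in place:
sed -i 's/\r$//' test.sh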
