I'm writing a script that runs wget to grab the backup file from several routers.
My first problem is with the variable RT. If I put only the IP address in this variable (and append /config.dat or whatever path in the wget line), the script works; otherwise wget reports that the "directory was not found."
How can I declare it the way it appears in the script below?
My second question: I want the output file (-O) to be the IP address with a .dat extension for the first two IPs, and just the IP with no extension for the third.
Is it possible to do that?
#!/bin/sh
RT="10.0.0.59/config.dat 10.0.0.60/cgi-bin/export_settings.cgi 10.0.0.66/rom-0"
MT="10.0.0.57 10.0.0.58"
L_RT="LOGIN"
P_RT="PASSWORD"
#FUTURE USE WITH TAR
#tmp=$"(mktemp -d)"
#trap -- 'rm -frv -- "$tmp"' EXIT
#cd -- "$tmp"
for bkp_rt in $RT; do
wget --auth-no-challenge --user=$L_RT --password=P_RT $bkp_rt -O $bkp_rt
done
After updating my script:
#!/bin/bash
RT="10.0.0.59/config.dat 10.0.0.60/cgi-bin/export_settings.cgi 10.0.0.66/rom-0"
MT="10.0.0.57 10.0.0.58"
L_RT="admin"
P_RT="PASSWORD"
L_MT="backup"
#tmp=$"(mktemp -d)"
#trap -- 'rm -frv -- "$tmp"' EXIT
#cd -- "$tmp"
for bkp_rt in $RT; do
wget --auth-no-challenge --user=$L_RT --password=$P_RT \
"$bkp_rt" \
-O "$bkp_rt"
done
You're missing the $ for --password=P_RT.
If you want to use 10.0.0.60/cgi-bin/export_settings.cgi as a filename, you'll have to make sure the 10.0.0.60 and 10.0.0.60/cgi-bin directories exist first.
To fetch the URL with wget, you will probably need to specify the scheme:
for bkp_rt in $RT; do
mkdir -p "$(dirname "$bkp_rt")"
wget --auth-no-challenge --user="$L_RT" --password="$P_RT" "$bkp_rt" \
-O "$bkp_rt"
done
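The snippet above still passes the bare host/path to wget; here is a minimal sketch that prepends the scheme for the request while keeping the bare path as the local file name (assuming the routers speak plain HTTP):
for bkp_rt in $RT; do
    mkdir -p "$(dirname "$bkp_rt")"
    # request http://<ip>/<path> but save it under <ip>/<path> locally
    wget --auth-no-challenge --user="$L_RT" --password="$P_RT" \
        "http://$bkp_rt" -O "$bkp_rt"
done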
I want to copy the functionality of a Windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
mkdir $i
mv $i.* ./$i
done
rm *.txt
Is there a better way of doing this? Thanks.
EDIT: My script failed with real-world filenames, as they contained more than one ., so I had to use a different sed command to make it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .. so change your call to ls to:
ls -A > list.txt
-A List all entries except for . and ... Always set for the super-user.
You don't have to create a file to achieve the same result; just assign the output of your ls command to a variable and do something like this:
files=`ls -A`
for file in $files; do
echo $file
done
You can also check if the resource is a file or directory like this:
files=`ls -A`
for res in $files; do
if [[ -d $res ]];
then
echo "$res is a folder"
fi
done
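Not part of the original answers, but a variant that avoids parsing ls output altogether (and therefore also copes with filenames containing spaces) is to loop over a glob:
for res in *; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done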
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do
dir="${file%.*}"
{ ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
echo mkdir -p -- "$dir"
echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove both echo commands.
The script is located here: https://github.com/docker-library/ghost/blob/master/docker-entrypoint.sh
#!/bin/bash
set -e
if [[ "$*" == npm*start* ]]; then
baseDir="$GHOST_SOURCE/content"
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
mkdir -p "$targetDir"
if [ -z "$(ls -A "$targetDir")" ]; then
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
fi
done
if [ ! -e "$GHOST_CONTENT/config.js" ]; then
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
fi
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
chown -R user "$GHOST_CONTENT"
set -- gosu user "$@"
fi
exec "$#"
From what I can tell, it says that if you run some variation of npm start, it moves some files around from $GHOST_SOURCE to $GHOST_CONTENT, does something to the config.js file, links the config file, sets ownership of the content files, and then executes npm start as the user user. Otherwise, it just runs your command normally.
The specifics are what are hard for me to understand because there are a lot of things from bash that I've never seen before. So I have a lot of questions.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't /*/ contain themes? Is * not a wildcard for some reason?
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like rsync? I understand the point of -C, but why -c and --one-file-system?
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" at the end?
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them to each other if both files already exist?
set -- gosu user "$@"
In the above what does calling set with no args do?
I hope that's not too much. I felt making a separate question for each of these would be too much especially since it's all related to each other.
for dir in "$baseDir"/*/ "$baseDir"/themes/*/; do
In the above, why do they specify both /*/ and /themes/*/? Shouldn't
/*/ contain themes? Is * not a wildcard for some reason?
themes/ is in the first match, but themes/*/ is not, so you need the second entry to include the contents of themes.
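A small illustration of what each glob matches, using a throw-away directory (the subdirectory names are made up):
baseDir=$(mktemp -d)                          # disposable directory for the demo
mkdir -p "$baseDir"/{apps,data,themes/casper,themes/other}
printf '%s\n' "$baseDir"/*/                   # .../apps/ .../data/ .../themes/
printf '%s\n' "$baseDir"/themes/*/            # .../themes/casper/ .../themes/other/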
targetDir="$GHOST_CONTENT/${dir#$baseDir/}"
In the above, what is the point of # in the variable expansion?
It removes the $baseDir prefix from $dir. So for example:
bash$ dir=/home/bmitch/data/docker
bash$ echo $dir
/home/bmitch/data/docker
bash$ echo ${dir#/home/bmitch}
/data/docker
tar -c --one-file-system -C "$dir" . | tar xC "$targetDir"
In the above, does this somehow save time? Why not use something like
rsync? I understand the point of -C, but why -c and --one-file-system?
rsync may not be installed on every machine by default, tar is fairly universal. The -c is to create, vs extract, and --one-file-system avoids tar continuing to an outside mount point (nfs, symlink to root, etc).
sed -r '
s/127\.0\.0\.1/0.0.0.0/g;
s!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g;
' "$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js"
What does this sed command do? I know it's a replacement, but why the
"$GHOST_SOURCE/config.example.js" > "$GHOST_CONTENT/config.js" as the
end?
config.example.js is the input (the last argument to sed); config.js is the output (after the >). So it takes config.example.js and changes the IP address from 127.0.0.1 to 0.0.0.0, effectively listening on all interfaces/IPs instead of just internally on the loopback. The second half of the sed changes the path.join arguments from __dirname to process.env.GHOST_CONTENT.
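For illustration, here is the second substitution applied to a made-up line (the path in the example is hypothetical, not taken from Ghost's actual config):
echo "path.join(__dirname, '/content/images')" \
  | sed -r 's!path.join\(__dirname, (.)/content!path.join(process.env.GHOST_CONTENT, \1!g'
# -> path.join(process.env.GHOST_CONTENT, '/images')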
ln -sf "$GHOST_CONTENT/config.js" "$GHOST_SOURCE/config.js"
In the above, what is the point of this symlink? Why try to link them
to each other if both files already exist?
The $GHOST_SOURCE/config.js is replaced (-f) with a link to $GHOST_CONTENT/config.js. Symbolic links give a file name reference to another actual file, so there will be two names, but one copy of the data, which means you will only have a single configuration in this situation.
set -- gosu user "$@"
In the above what does calling set with no args do?
This changes the values of $1, $2, ... $n to be $1=gosu, $2=user, $3=the old $1, $4=the old $2, and so on, essentially adding gosu and user to the beginning of the parameters passed to the script. The -- makes sure that set doesn't interpret any values from $@ as a flag for itself.
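A small illustration of the effect (the npm start invocation is only an example):
# If the entrypoint was invoked with the arguments:  npm start
# then before this line:  $1=npm   $2=start
set -- gosu user "$@"
# and after it:           $1=gosu  $2=user  $3=npm  $4=start
# so the final `exec "$@"` runs:  gosu user npm start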
I have a little and probably very stupid problem.
I'm trying to make an alias for tar and gzip that uses the file name given as an argument, but the argument is not being substituted into the output filename as expected.
My alias is:
alias targz='tar -cvzf $1.tar.gz $1'
It works, but the argument is not available in $1 when building the filename, so it puts everything in a file called ".tar.gz".
I tried just echoing '$1.tar.gz' and the output is '.tar.gz', so I think it must be something very simple that I'm missing.
Any help is welcome.
Aliases don't have positional parameters. They're basically macros (an alias gets replaced with the text of the alias when executed).
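A small illustration of why that fails (the invocation is hypothetical):
# With the alias defined, the command line
#   targz mydir
# is expanded textually to
#   tar -cvzf $1.tar.gz $1 mydir
# In an interactive shell $1 is normally empty, so the archive name collapses
# to ".tar.gz" and "mydir" is simply appended as an extra argument to tar.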
You could use a function:
targz() {
tar -cvzf "$1".tar.gz "$1"
}
or a script
#!/bin/bash
tar -cvzf "$1".tar.gz "$1"
Personally, I've been using something like the following script to achieve a similar goal (comments added for your convenience):
#!/bin/bash
#support multiple args
for arg in "$#"
do
#strip ending slash if present (tab-completion adds them)
arg=${arg%/}
#compress with pigz (faster on multicore systems)
tar c "$arg" | pigz - > "$arg".tgz
done
In case you want my complete version, I also remove the argument directory if the tarring and compression succeed (similar to what gzip does for individual files):
#!/bin/bash
set -o pipefail
for arg in "$#"
do
arg=${arg%/}
tar c "$arg" | pigz - > "$arg".tgz && rm -rf "$arg"
done
Update:
Credits to @mklement0 for the more succinct and more efficient stripping of trailing slashes.
Use an alias that calls a function, something like:
alias targz='function targz_() { tar -cvzf "$1.tar.gz" "$1"; return 0; }; targz_ '
Or try making a script to do that, for example by writing it out with a quoted here-document:
cat << 'EOF' > /usr/local/bin/targz
#!/bin/bash
if [ -z "$1" ]; then
    echo "Variable is Null"
else
    tar -cvzf "$1".tar.gz "$1"
fi
EOF
chmod +x /usr/local/bin/targz
How can I have the following command
echo "something" > "$f"
where $f is something like folder/file.txt, create the folder folder if it does not exist?
If I can't do that, how can I have a script duplicate all folders (without contents) in directory 'a' to directory 'b'?
e.g if I have
a/f1/
a/f2/
a/f3/
I want to have
b/f1/
b/f2/
b/f3/
The other answers here are using the external command dirname. This can be done without calling an external utility.
mkdir -p "${f%/*}"
You can also check whether the directory already exists, but this is not really required with mkdir -p:
mydir="${f%/*}"
[[ -d $mydir ]] || mkdir -p "$mydir"
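For example (the path is hypothetical); note that ${f%/*} only helps if $f actually contains a slash:
f="folder/file.txt"
mkdir -p "${f%/*}"        # creates "folder"
echo "something" > "$f"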
echo "something" | install -D /dev/stdin $f
Try:
mkdir -p "$(dirname "$f")" && echo "something" > "$f"
You can use mkdir -p to create the folder before writing to the file:
mkdir -p "$(dirname $f)"
I've looked around for an answer to this one but couldn't find one.
I have written a simple script that does initial server setup, and I'd like it to remove/unlink itself from the root directory on completion. I've tried a number of solutions I googled (for example /bin/rm $test.sh), but the script always seems to remain in place. Is this possible? Below is my script so far.
#! /bin/bash
cd /root/
wget -r -nH -np --cut-dirs=1 http://myhost.com/install/scripts/
rm -f index.html* *.gif */index.html* */*.gif robots.txt
ls -al /root/
if [ -d /usr/local/psa ]
then
echo plesk > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc | rm -rf /root/bin | rm -rf /root/log | rm -rf /root/old
sed -i "75s/false/true/" /etc/permissions/jail.conf
exit 1;
elif [ -d /var/webmin ]
then
echo webmin > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc | rm -rf /root/bin | rm -rf /root/log | rm -rf /root/old
sed -i "67s/false/true/" /etc/permissions/jail.conf
break
exit 1;
else
echo no-gui > /root/bin/INST_SERVER_TYPE.txt
chmod 775 /root/bin/*
/root/bin/setting_server_ve.sh
rm -rf /root/etc | rm -rf /root/bin | rm -rf /root/log | rm -rf /root/old
sed -i "67s/false/true/" /etc/permissions/jail.conf
break
exit 1;
fi
rm -- "$0"
Ought to do the trick. $0 is a magic variable for the full path of the executed script.
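A minimal sketch of how that looks in practice (the body of the script is a placeholder):
#!/bin/bash
# ... do the real work here ...
rm -- "$0"    # unlinking is safe: the shell already has the script file open,
              # so any remaining lines still execute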
This works for me:
#!/bin/sh
rm test.sh
Maybe you didn't really mean to have the '$' in '$test.sh'?
The script can delete itself via the shred command (as a secure deletion) when it exits.
#!/bin/bash
currentscript="$0"
# Function that is called when the script exits:
function finish {
echo "Securely shredding ${currentscript}"; shred -u ${currentscript};
}
# Do your bashing here...
# When your script is finished, exit with a call to the function, "finish":
trap finish EXIT
The simplest one:
#!/path/to/rm
Usage: ./path/to/the/script/above
Note: /path/to/rm must not have blank characters at all.
I wrote a small script that adds a grace period to a self-deleting script, based on user742030's answer https://stackoverflow.com/a/34303677/10772577.
function selfShred {
SHREDDING_GRACE_SECONDS=${SHREDDING_GRACE_SECONDS:-5}
if (( $SHREDDING_GRACE_SECONDS > 0 )); then
echo -e "Shreding ${0} in $SHREDDING_GRACE_SECONDS seconds \e[1;31mCTRL-C TO KEEP FILE\e[0m"
BOMB="●"
FUZE='~'
SPARK="\e[1;31m*\e[0m"
SLEEP_LEFT=$SHREDDING_GRACE_SECONDS
while (( $SLEEP_LEFT > 0 )); do
LINE="$BOMB"
for (( j=0; j < $SLEEP_LEFT - 1; j++ )); do
LINE+="$FUZE"
done
LINE+="$SPARK"
echo -en "$LINE" "\r"
sleep 1
(( SLEEP_LEFT-- ))
done
fi
shred -u "${0}"
}
trap selfShred EXIT
See the repo here: https://github.com/reedHam/self-shred
$0 may not contain the script's name/path in certain circumstances. Please check the following: https://stackoverflow.com/a/35006505/5113030 (Choosing between $0 and BASH_SOURCE...)
The following script should work as expected in these cases:
source script.sh - the script is sourced;
./script.sh - executed interactively;
/bin/bash -- script.sh - passed as an argument to a shell program.
#!/usr/bin/env bash
# ...
rm -- "$( readlink -f -- "${BASH_SOURCE[0]:-$0}" 2> '/dev/null'; )";
Please check the following regarding shell script source reading and execution since it may affect the behavior when a script is deleted while running: https://unix.stackexchange.com/a/121025/133353 (How Does Linux deal with shell scripts?...)
Related: https://stackoverflow.com/a/246128/5113030 (How can I get the source directory of a Bash script from...)
Just add to the end:
rm -- "$0"
Why remove the script at all? As others have mentioned, it means you have to keep a copy elsewhere.
A suggestion is to use a "firstboot" like approach. Simply create an empty file in e.g. /etc/sysconfig that triggers the execution of this script if it is present. Then remove that file at the end of the script.
Modify the script so it has the necessary chkconfig headers and place it in /etc/init.d/ so it is run at every boot.
That way you can rerun the script at a later time simply by recreating the trigger file.
Hope this helps.