Bash cron job on hpanel not locating directory

I have the following code in a cron job. It runs, but it does not do what it is supposed to: it does not create the directory, and the rest of the commands do nothing either. Please check whether the way I pointed to the directory is wrong.
#!/bin/bash
NAMEDATE=`date +%F_%H-%M`_`whoami`
NAMEDATE2=`date `
mkdir ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"*******" u3811*****_data | gzip ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" |
mailx -a ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail@gmail.com
chmod -R 0644 ~/home/u3811*****/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
exit 0

Your NAMEDATE variable needs to be modified a bit, as shown below; for more information about variables in bash, see this link.
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
When you issue the mkdir command you will need to pass the -p option to create the complete directory structure if it doesn't exist.
mkdir -p ~/home/u3811numbers/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
Also, the ~ character on Linux-based distributions is a shortcut for the home directory of the user that invokes it, so in a path like the one in your mkdir line it expands to /home/u3811*****/home/u3811*****/domains/website.com/public_html/cron/backup/files/2020-09-04_23-13_ (the home-directory prefix ends up doubled). The part after ~ should therefore start at domains/, i.e. ~/domains/website.com/public_html/...; you can read more about it here.
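As a quick illustration (assuming the account's home directory is /home/u3811*****):
echo ~
# /home/u3811*****
echo ~/home/u3811*****/domains/website.com/public_html
# /home/u3811*****/home/u3811*****/domains/website.com/public_html   (doubled prefix)
echo ~/domains/website.com/public_html
# /home/u3811*****/domains/website.com/public_html                   (what you actually want)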
In your last command before the exit, you need to keep the wildcard (*) so that you do not also remove the executable bit from the directory itself, see below
chmod -R 0644 ~/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
The final version of your script will look something like this.
#!/bin/bash
NAMEDATE=$(date +%F_%H-%M"_"$(whoami))
NAMEDATE2=$(date)
mkdir -p ~/domains/website.com/public_html/cron/backup/files/$NAMEDATE -m 0755
mysqldump -u u3811*****_boss -p"******" u3811*****_data | gzip > ~/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz
echo "This is the database backup for website.com on $NAMEDATE2" | mailx -a ~/domains/website.com/public_html/cron/backup/files/$NAMEDATE/db.sql.gz -s "website.com Database attached" -- mail@gmail.com
chmod -R 0644 ~/domains/website.com/public_html/cron/backup/files/$NAMEDATE/*
To debug a bash script you can always pass the -x flag; for more information, take a look at this article.
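For example (the path below is just a placeholder for wherever your script lives):
bash -x /path/to/backup-script.sh   # print every command as it is expanded and executed
# or, inside the script itself, right after the shebang line:
set -x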

Related

How to delete contents of user home dir safely via bash

I am writing a bash script to do an account restore. The contents of the home dir are zipped up using this command.
sudo sh -c "cd /home/$username; zip -0 -FS -r -b /tmp /home/0-backup/users/$username.zip ."
This works as expected.
If the user requests a restore of their data, I am doing the following
sudo sh -c "cd /home/$username; rm -rf *"
Then
sudo -u $username unzip /home/0-backup/users/$username.zip -d /home/$username/
This works as expected.
However, you can see the flaw in the delete statement: if the username is not set, we delete every user's home dir. I have if statements that check to make sure the username is there. I am looking for some advice on a better way to handle resetting the user's account data that isn't so dangerous.
One thought I had was to delete the user account and then recreate it. Then do the restore. I think that this would be less risky. I am open to any suggestions.
Check the parameters first.
Then use && after cd so that it won't execute rm if the cd fails.
if [ -n "$username" ] && [ -d "/home/$username" ]
then
sudo sh -c "cd '/home/$username' && rm -rf * .[^.]*"
fi
I added .[^.]* in the rm command so it will delete dot-files as well. [^.] is needed to prevent it from deleting . (the user's directory) and .. (the /home directory).
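If you want to avoid glob edge cases altogether, one possible alternative (a sketch using the same check as above) is to let find remove everything directly below the home directory:
if [ -n "$username" ] && [ -d "/home/$username" ]
then
# -mindepth 1 keeps the home directory itself, -maxdepth 1 limits the match to its
# direct entries (including dot-files), and rm -rf removes each of them
sudo find "/home/$username" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
fi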

Docker container unable to ignore the EntryPoint bash script failure

Bash script:
clonePath=/data/config/
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
cp -r $config_dir $clonePath/;
done
echo "API Config update complete..."
Dockerfile which issues this script execution
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
The error below causes the container startup to fail despite my setting the command status to 0 manually using || true.
Error:
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fb5e368e4cf.pack': Permission denied
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fbae25e368e4cf.idx': Permission denied
I am looking for 2 options here:
Change these file permissions and then store them in the remote with rwx permissions
Do something to the docker file to ignore this script failure error and start the container.
DOCKERFILE:
FROM docker.hub.com/java11-temurin:latest
USER root
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y rsync telnet vim wget git
RUN mkdir -p /opt/config/clone/data
RUN chown -R 1001:1001 /opt/config
USER 1001
ADD build/libs/my-api-config-server.jar .
ADD config-update-force.sh .
USER root
RUN chmod +x config-update-force.sh
USER 1001
EXPOSE 8080
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
BASH SCRIPT:
#!/bin/bash
set +e
set +x
clonePath=/opt/clone/data/data
#source Optumfile.properties
echo "properties loaded: example ${git_host}"
if [ -d my-api-config ]; then
rm -rf my-api-config;
echo "existing my-api-config dir deleted..."
fi
git_url=https://github.com/my-api-config-server
git clone https://github.com/my-api-config-server
cd my-api-config-server
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
cp -r $config_dir $clonePath/;
done
echo "My API Config update complete..."
When you already do this in the script...
chmod +x checkoutAllBranches.sh
...then why not also run, before the cp:
chmod -R +rwx ${clonePath}
...or, if the stderr message really "won't impact anything", just silence it:
cp -r $config_dir $clonePath/ 2>/dev/null;
...since cp is not copying verbosely anyway?
When your Dockerfile declares an ENTRYPOINT, that command is the only thing the container does. If it also declares a CMD, the CMD is passed as additional arguments to the ENTRYPOINT; it is not run on its own unless the ENTRYPOINT makes sure to execute it.
Shell errors are not normally fatal, and especially if you explicitly set +e, even if a shell command fails the shell script will keep running. You see this in your output where you get multiple cp errors; the first error does not terminate the script.
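A minimal standalone illustration of that behavior (not part of your script):
#!/bin/bash
set +e                                  # failures do not abort the script
cp /nonexistent-file /tmp/ 2>/dev/null  # this cp fails...
echo "cp exited with status $?, but the script keeps going"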
You need to do two things here. The first is to set the ENTRYPOINT to actually run the CMD; the simplest and most common way to do this is to end the script with
exec "$#"
The second is to remove the || true from the Dockerfile. As you have it written out currently, this is passed as an argument to the entrypoint wrapper; it is not run through a shell and it is not interpreted as an "or" operator. If your script begins with a "shebang" line and is marked executable (both of these are correct in the question), then you do not explicitly need the sh interpreter.
# must be a JSON array; no additional "|| true" argument; no sh -c wrapper
ENTRYPOINT ["./config-update-force.sh"]
# any valid CMD will work with exec "$@"
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
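The end of config-update-force.sh would then look roughly like this (only the final lines are shown; the rest of the script stays as it is):
# ... existing clone / checkout / copy logic ...
echo "My API Config update complete..."
# hand control over to whatever arguments Docker appended (i.e. the CMD)
exec "$@"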

Adding shell if statement inside lftp

I'm trying to use SFTP to copy some files from one server to another; this task should run every week. The script I use:
HOST='sftp://my.server.com'
USER='user1'
PASSWORD='passwd'
DIR=$HOSTNAME
REMOTE_DIR='/home/remote'
LOCAL_DIR='/home/local'
# lftp via an SFTP connection
lftp -u "$USER","$PASSWORD" $HOST <<EOF
# changing directory
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
My issue is that put is executed without taking into consideration the result of the if statement.
PS: The error message I got is the following:
put: Access failed: No such file (/home/backups/myhost/upload.txt)
LFTP has no if statement!
What are you doing here?
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
You call a sub-command in a here-document. The sub-command is executed locally before lftp is started, and its output is pasted into the here-document, which then gets passed to lftp. This only works because mkdir produces no output. You do not call mkdir on the FTP server; you call the mkdir of your local shell. Effectively it is the same as putting the if statement before the lftp invocation:
if [ ! -d "$DIR" ]; then
mkdir $DIR
fi
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
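You can see the same local-expansion effect without lftp at all; in this sketch the calling shell expands the $( ... ) before cat ever receives the here-document:
DIR=somedir
cat <<EOF
$(if [ ! -d "$DIR" ]; then echo "the local shell just checked for $DIR"; fi)
EOF
# cat only sees the text that the substitution already produced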
What you are trying to do does not work that way. You have to think about a different solution.
Right now I have no FTP server to test it, but it might be possible to use the -f option of LFTP's mkdir. I assume that it may work like the -f option of the Unix rm command. Try this:
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
mkdir -f "$DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
Update: It works as supposed. Creating a directory that already exists throws no error if you use the -f option:
lftp anonymous@localhost:/pub> mkdir -f dir
mkdir ok, `dir' created
lftp anonymous@localhost:/pub> mkdir -f dir
lftp anonymous#localhost:/pub> ls
drwx------ 2 116 122 4096 Aug 10 12:04 dir
Maybe your lftp client is outdated. I tested it with Debian 9.

mkdir always creates a file instead of a directory

First I want to say that I don't really know what I should look for here on Stack Overflow, or what a good search query for my problem would be.
In simple words, I want to create a new directory and then do some file operations in it. But with the script I have crafted I always get a file instead of a directory. It seems to make absolutely no difference how I stick the code together; the result is always the same. I hope the masses can help me with their knowledge.
Here is the script:
#!/bin/bash
DLURL=http://drubuntu.googlecode.com/git'
d7dir=/var/www/d7/'
dfsettings=/var/www/d7/sites/default/default.settings.php
settings=/var/www/d7/sites/default/settings.php
#settiing up drush
drush -y dl drush --destination=/usr/share;
#Download and set up drupal
cd /var/www/;
drush -y dl drupal;
mkdir "$d7dir"; #this is the line that always produces a file instead a directory
# regardless if it is replaced by the variable or entered as
# /var/www/d7
cd /var/www/drup*;
cp .htaccess .gitignore "$d7dir";
cp -r * "$d7dir";
cd "$d7dir";
rm -r /var/www/drup*;
mkdir "$d7dir"sites/default/files;
chmod 777 "$d7dir"sites/default/files;
cp "$dfsettings" "$settings";
chmod 777 "$settings";
chown $username:www-data /var/www/d7/.htaccess;
wget -O $d7dir"setupsite $DLURL/scripts/setupsite.sh; > /dev/null 2>&1
chmod +x /var/www/setupsite;
echo "Login Details following...";
read -sn 1 -p "Press any key to continue...";
bash "$d7dir"setupsite;
chown -Rh $username:www-data /var/www;
chmod 644 $d7dir".htaccess;
chmod 644"$settings";
chmod 644"$dfsettings";
exit
I hope someone can spot the reason for that.
There are many ways to debug a shell script.
Add set -x at the beginning of your script.
Check the return value:
mkdir 'the-directory'
ret=$?
if test $ret -eq 0; then
echo 'Create success.'
else
echo 'Failed to create.'
fi
Use verbose mode: $ mkdir -v 'the-directory'
Try the command $ type mkdir to check which mkdir is actually being executed.
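For the last point, the expected output on a typical system looks like the session below; if a function or alias is shadowing the real binary, type reports that instead:
$ type mkdir
mkdir is /bin/mkdir
# (the path may be /usr/bin/mkdir depending on the distribution)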

Bash script stops execution in the middle of the script without any error

I have this simple bash script that gets a copy from my dev server:
#!/bin/sh
DATE=`date +%Y-%m-%d_%H%M.%S`
BASEDIR="/var/www/db"
RELEASEDIR="$DATE";
RELEASEDIRFULL="$BASEDIR/releases/$RELEASEDIR"
mkdir -p "$RELEASEDIRFULL"
echo "Chdir to \"$RELEASEDIRFULL\""
cd "$RELEASEDIRFULL"
echo "Getting copy from dev"
ssh dev.example.tld "cd /tmp; cd /sites/db; tar -zcvp --exclude data --exclude scripts -f - *" | tar zxvpf -
ln -s /var/www/db/data data
ln -s /var/www/db/scripts scripts
cd $BASEDIR
rm htdocs; ln -s releases/$RELEASEDIR htdocs
Recently it stopped working properly for no apparent reason. It gets to the ssh line and executes it fine (the files appear on the live server), but does not proceed to the ln commands. If I comment the ssh line out, the ln lines get executed properly.
UPDATE: I noticed that when I'm logged on as www-data and start the script, it completes as expected, without errors.
No time to check the man page, but it looks like your tar input is - * - all files plus stdin? Did you mean -- for suspension of further argument processing (if tar supports that)?
