mkdir always creates a file instead of a directory - bash

First, I want to say that I don't really know what I should look for here on Stack Overflow, or what a good search query for my problem would be.
In simple words: I want to create a new directory and then do some file operations in it. But with the script I have crafted, I always get a file instead of a directory, seemingly regardless of how I stick the code together. I hope someone can help me with their knowledge.
Here is the script:
#!/bin/bash
DLURL=http://drubuntu.googlecode.com/git'
d7dir=/var/www/d7/'
dfsettings=/var/www/d7/sites/default/default.settings.php
settings=/var/www/d7/sites/default/settings.php
#setting up drush
drush -y dl drush --destination=/usr/share;
#Download and set up drupal
cd /var/www/;
drush -y dl drupal;
mkdir "$d7dir"; #this is the line that always produces a file instead a directory
# regardless if it is replaced by the variable or entered as
# /var/www/d7
cd /var/www/drup*;
cp .htaccess .gitignore "$d7dir";
cp -r * "$d7dir";
cd "$d7dir";
rm -r /var/www/drup*;
mkdir "$d7dir"sites/default/files;
chmod 777 "$d7dir"sites/default/files;
cp "$dfsettings" "$settings";
chmod 777 "$settings";
chown $username:www-data /var/www/d7/.htaccess;
wget -O $d7dir"setupsite $DLURL/scripts/setupsite.sh; > /dev/null 2>&1
chmod +x /var/www/setupsite;
echo "Login Details following...";
read -sn 1 -p "Press any key to continue...";
bash "$d7dir"setupsite;
chown -Rh $username:www-data /var/www;
chmod 644 $d7dir".htaccess;
chmod 644"$settings";
chmod 644"$dfsettings";
exit
I hope someone knows the reason for this.

There are many ways to debug a shell script.
Add set -x at the beginning of your script (see the sketch after this list).
Check the return value:
mkdir 'the-directory'
ret=$?
if test "$ret" -eq 0; then
    echo 'Created successfully.'
else
    echo 'Failed to create.'
fi
Use verbose mode: $ mkdir -v 'the-directory'
Run $ type mkdir to check what the mkdir command actually resolves to (builtin, alias, function, or external binary).
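For example, here is a minimal sketch (the directory name is hypothetical) of what set -x tracing shows. Run on the script from the question, it would print every command with its variables already expanded, so you could see exactly what mkdir receives:
#!/bin/bash
set -x            # echo each command to stderr before running it
d7dir=/var/www/d7/
mkdir "$d7dir"
# stderr then shows:
# + d7dir=/var/www/d7/
# + mkdir /var/www/d7/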

Related

How to delete contents of user home dir safely via bash

I am writing a bash script to do an account restore. The contents of the home dir are zipped up using this command:
sudo sh -c "cd /home/$username; zip -0 -FS -r -b /tmp /home/0-backup/users/$username.zip ."
This works as expected.
If the user requests a restore of their data, I do the following:
sudo sh -c "cd /home/$username; rm -rf *"
Then
sudo -u $username unzip /home/0-backup/users/$username.zip -d /home/$username/
This works as expected.
However, you can see the flaw in the delete statement: if the username is not set, we delete the contents of every user's home directory. I have if statements that check that the username is set, but I am looking for advice on a better way to handle resetting a user's account data that isn't so dangerous.
One thought I had was to delete the user account, recreate it, and then do the restore. I think that would be less risky. I am open to any suggestions.
Check the parameters first.
Then use && after the cd so that rm won't execute if the cd fails:
if [ -n "$username" ] && [ -d "/home/$username" ]
then
sudo sh -c "cd '/home/$username' && rm -rf * .[^.]*"
fi
I added .[^.]* to the rm command so it deletes dot-files as well. [^.] is needed to prevent it from matching . (the user's directory itself) and .. (the /home directory). Note that this pattern still misses names beginning with two dots, such as ..foo.
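As an alternative sketch of my own (assuming GNU find; this is not from the original answer), find sidesteps the glob edge cases entirely, guarded by the same checks as above:
if [ -n "$username" ] && [ -d "/home/$username" ]
then
    # -mindepth 1 spares the home directory itself; -delete removes
    # files, dot-files, and subdirectories depth-first
    sudo find "/home/$username" -mindepth 1 -delete
fi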

Docker container unable to ignore the EntryPoint bash script failure

Bash script:
clonePath=/data/config/
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
    cp -r $config_dir $clonePath/;
done
echo "API Config update complete..."
The Dockerfile instruction that runs this script:
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
The error below causes container startup to fail, despite my attempt to force the command status to 0 with || true.
ERROR:
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fb5e368e4cf.pack': Permission denied
cp: cannot create regular file '/data/./.git/objects/pack/pack-27a9d...fbae25e368e4cf.idx': Permission denied
I am looking at 2 options here:
Change these file permissions and then store them in the remote with rwx permissions.
Do something in the Dockerfile to ignore this script failure and start the container anyway.
DOCKERFILE:
FROM docker.hub.com/java11-temurin:latest
USER root
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y rsync telnet vim wget git
RUN mkdir -p /opt/config/clone/data
RUN chown -R 1001:1001 /opt/config
USER 1001
ADD build/libs/my-api-config-server.jar .
ADD config-update-force.sh .
USER root
RUN chmod +x config-update-force.sh
USER 1001
EXPOSE 8080
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
ENTRYPOINT ["sh","config-update-force.sh","|| true"]
BASH SCRIPT:
#!/bin/bash
set +e
set +x
clonePath=/opt/clone/data/data
#source Optumfile.properties
echo "properties loaded: example ${git_host}"
if [ -d my-api-config ]; then
    rm -rf my-api-config;
    echo "existing my-api-config dir deleted..."
fi
git_url=https://github.com/my-api-config-server
git clone https://github.com/my-api-config-server
cd my-api-config-server
git branch -r | fgrep -v 'origin/HEAD' | sed 's| origin/|git checkout |' > checkoutAllBranches.sh
chmod +x checkoutAllBranches.sh
echo "Fetch branch: `cat checkoutAllBranches.sh`"
./checkoutAllBranches.sh
git checkout master
git remote rm origin
rm checkoutAllBranches.sh
for config_dir in `ls -a`; do
    cp -r $config_dir $clonePath/;
done
echo "My API Config update complete..."
Since the script already does chmod +x checkoutAllBranches.sh, why not also run, before the cp:
chmod -R +rwx ${clonePath}
Or, if the stderr messages really "won't impact anything", discard them:
cp -r $config_dir $clonePath/ 2>/dev/null;
That way cp doesn't report the failures at all.
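A rough sketch of that suggestion (assuming the container user actually owns ${clonePath}; if it does not, the chmod itself will fail and the root cause remains):
# loosen permissions on the target tree before copying
chmod -R +rwx "${clonePath}"
for config_dir in *; do
    # discard the stderr noise only if it truly doesn't matter
    cp -r "$config_dir" "${clonePath}/" 2>/dev/null
done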
When your Dockerfile declares an ENTRYPOINT, that command is the only thing the container does. If it also declares a CMD, the CMD is passed as additional arguments to the ENTRYPOINT; it is not run on its own unless the ENTRYPOINT makes sure to execute it.
Shell errors are not normally fatal, and especially if you explicitly set +e, even if a shell command fails the shell script will keep running. You see this in your output where you get multiple cp errors; the first error does not terminate the script.
You need to do two things here. The first is to set the ENTRYPOINT to actually run the CMD; the simplest and most common way to do this is to end the script with
exec "$#"
The second is to remove the || true from the Dockerfile. As you have it written currently, this is passed as the first argument to the entrypoint wrapper; it is not run through a shell and it is not interpreted as an "or" operator. If your script begins with a "shebang" line and is marked executable (both of these are correct in the question), then you do not explicitly need the sh interpreter.
# must be a JSON array; no additional "|| true" argument; no sh -c wrapper
ENTRYPOINT ["./config-update-force.sh"]
# any valid CMD will work with exec "$@"
CMD java $BASE_JAVA_OPTS $JAVA_OPTS -jar my-api-config-server.jar
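Putting it together, the end of config-update-force.sh would look something like this sketch:
#!/bin/bash
# ... existing clone/checkout/copy logic ...
echo "My API Config update complete..."
# replace the shell with whatever arguments Docker passed in, i.e. the CMD
exec "$@"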

chmod +x can't find build.sh file

I have been trying to make a VCS in C++, but the build file is not running on my Linux (Ubuntu) machine; it prompts the message above.
My build file is as follows:
#!/bin/bash
sudo apt-get update
udo apt-get install openssl -y
sudo apt-get install libssl-dev -y
mkdir -p ~/imperium/bin
cp imperium.sh ~/imperium
cd ..
make
cd ~/imperium/bin || echo "error"
chmod +x main
cd ..
if grep -q "source $PWD/imperium.sh" "$PWD/../.bashrc" ; then
echo 'already installed bash source';
else
echo "source $PWD/imperium.sh" >> ~/.bashrc;
fi
My imperium.sh file is as follows:
function imperium(){
    DIR=$PWD
    export dir=$DIR
    cd ~/imperium/bin || echo "Error"
    ./main "$@"
    cd "$DIR" || echo "Error"
}
I will be heavily obliged if anyone can solve this problem of mine. After the chmod I have been running:
./build.sh, but it prompts that the build.sh file does not exist.
It seems you have a typo right in the 3rd row: "udo" -> "sudo".
Also, you should avoid using cd .. and should instead use paths relative to a known location for the commands.
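As a sketch of both points (set -e and the dirname idiom are my own additions, not from the original answer): failing fast would have surfaced the udo typo immediately, and anchoring on the script's own location avoids fragile cd .. chains:
#!/bin/bash
set -e                                        # abort on the first failing command, e.g. 'udo'
script_dir="$(cd "$(dirname "$0")" && pwd)"   # absolute path to this script's directory
cd "$script_dir"                              # work from a known location
make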

Adding shell if statement inside lftp

I'm trying to use SFTP to copy some files from one server to another; this task should run every week. The script I use:
HOST='sftp://my.server.com'
USER='user1'
PASSWORD='passwd'
DIR=$HOSTNAME
REMOTE_DIR='/home/remote'
LOCAL_DIR='/home/local'
# LFTP via SFTP connection
lftp -u "$USER","$PASSWORD" $HOST <<EOF
# changing directory
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
My issue is that put is executed without taking the result of the if statement into consideration.
PS: The error message I got is the following:
put: Access failed: No such file (/home/backups/myhost/upload.txt)
LFTP has no if statement!
What are you doing here?
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
$(if [ ! -d "$DIR" ]; then
mkdir $DIR
fi)
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
You call a sub-command in a here-document. The sub-command is executed locally, before lftp is started, and its output is pasted into the here-document, which then gets passed to lftp. This only appears to work because mkdir produces no output. You do not call mkdir on the FTP server; you call the mkdir of your local shell. Effectively it is the same as if you had put the if statement before the lftp invocation:
if [ ! -d "$DIR" ]; then
mkdir $DIR
fi
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
What you are trying to do does not work this way. You have to think about a different solution.
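A tiny standalone demo of that point (my own illustration, not from the original answer): the command substitution runs in the local shell before the consuming program ever sees the here-document:
cat <<EOF
$(if [ ! -d /tmp/demo ]; then echo "this line was produced locally, before cat started"; fi)
EOF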
Right now I have no FTP server to test it, but it might be possible to use the -f option of LFTP's mkdir. I assume that it may work like the -f option of the Unix rm command. Try this:
lftp -u "$USER","$PASSWORD" $HOST <<EOF
cd "$REMOTE_DIR"
mkdir -f "$DIR"
put -O "$REMOTE_DIR"/$DIR "$LOCAL_DIR"/uploaded.txt
EOF
Update: It works as expected. Creating a directory which already exists throws no error if you use the option -f:
lftp anonymous@localhost:/pub> mkdir -f dir
mkdir ok, `dir' created
lftp anonymous@localhost:/pub> mkdir -f dir
lftp anonymous@localhost:/pub> ls
drwx------ 2 116 122 4096 Aug 10 12:04 dir
Maybe your lftp client is outdated. I tested it with Debian 9.

File permissions, root bash script, edit by user

I have a script that needs to be run as root. In this script I create directories and files. The files and directories cannot be modified by the user who ran the script (unless they're root, of course).
I have tried several solutions found here and on other sites. First I tried to mkdir -m 777 the directories, like so:
#!/bin/bash
...
#Check execution location
CDIR=$(pwd)
#File setup
DATE=$(date +"%m-%d_%H:%M:%S")
LFIL="$CDIR/android-tools/logcat/logcat_$DATE.txt"
BFIL="$CDIR/android-tools/backup/backup_$DATE"
mkdir -m 777 -p "$CDIR/android-tools/logcat/"
mkdir -m 777 -p "$CDIR/android-tools/backup/"
...
I have also tried, as root, touching every created file and directory as $USER, like so:
#!/bin/bash
...
#Check execution location
CDIR=$(pwd)
#File setup
DATE=$(date +"%m-%d_%H:%M:%S")
LFIL="$CDIR/android-tools/logcat/logcat_$DATE.txt"
BFIL="$CDIR/android-tools/backup/backup_$DATE"
mkdir -p "$CDIR/android-tools/logcat/"
mkdir -p "$CDIR/android-tools/backup/"
sudo -u $USER touch "$CDIR/"
sudo -u $USER touch "$CDIR/android-tools/"
sudo -u $USER touch "$CDIR/android-tools/logcat/"
sudo -u $USER touch "$CDIR/android-tools/backup/"
sudo -u $USER touch "$CDIR/android-tools/logcat/logcat_*.txt"
sudo -u $USER touch "$CDIR/android-tools/logcat/Backup_*"
...
I have also tried manually running sudo chmod 777 /android-tools/* and sudo chmod 777 /* from the script directory. Neither gave errors, but I still cannot delete the files without root permission.
Here's the full script. It's not done yet; don't run it with an Android device connected to your computer.
http://pastebin.com/F20rLJQ4
touch doesn't change ownership. I think you want chown.
If you're using sudo to run your script, $USER is root, but $SUDO_USER is the user who ran sudo, so you can use that.
If you're not using sudo, you can't trust $USER to be anything in particular. The caller can set it to anything (like "root cat /etc/shadow", which would make your above script do surprising things you didn't want it to do because you said $USER instead of "$USER").
If you're running this script using setuid, you need something safer, like id -u, to get the calling process's legitimate UID regardless of what arbitrary string happens to be in $USER.
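A quick demo of why $USER cannot be trusted (my own illustration): the environment variable is entirely caller-controlled, while id -u comes from the kernel:
# the caller can make $USER say anything at all
USER='root something-evil' bash -c 'echo "USER=$USER but real uid is $(id -u)"'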
If you cover both possibilities by making makestuff.sh like this:
# $SUDO_USER if set, otherwise the current user
caller="${SUDO_USER:-$(id -u)}"
mkdir -p foo/bar/baz
chown -R "$caller" foo
Then you can use it this way:
sudo chown root makestuff.sh
sudo chmod 755 makestuff.sh
# User runs it with sudo
sudo ./makestuff.sh
# User can remove the files
rm -r foo
Or this way (if you want to use setuid so regular users can run the script without having sudo access -- which you probably don't, because you're not being careful enough for that):
sudo chown root makestuff.sh
sudo chmod 4755 makestuff.sh # Danger! I told you not to do this.
# User runs it without sudo
./makestuff.sh
# User can remove the files
rm -r foo
