If I make a directory with mkdir -p, it causes problems with scripts
$ mkdir -p test2/test2
$ cd test2/test2
$ echo '#!/bin/sh
> echo hello' > hello.sh
$ ./hello.sh
bash: ./hello.sh: Permission denied
This has nothing to do with mkdir. You simply haven't given hello.sh execute permission. You need the following:
chmod +x hello.sh
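Once the execute bit is set, the script runs as expected:
$ ./hello.sh
hello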
Check your permissions
Check the permissions on your directories and the script itself. There may be a problem there, although it's unlikely.
ls -lad test2/test2
ls -l test2/test2/hello.sh
You can always use the --mode flag with mkdir if your permissions aren't being set correctly for some reason. See chmod(1) and mkdir(1) for more information.
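For example, a minimal sketch (the 755 mode is illustrative; note that with -p the mode applies only to the final directory, not to intermediate ones):
mkdir -p --mode=755 test2/test2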
Execute the file directly
You can execute the script with Bash directly, rather than relying on a shebang line or the executable bit, as long as the file is readable by the current user. For example:
bash test2/test2/hello.sh
Change file permissions
If you can execute the file when invoked explicitly with Bash, then you just need to make sure your file has the execute bit set. For example:
chmod 755 test2/test2/hello.sh
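You can then confirm the bit is set (output illustrative):
$ ls -l test2/test2/hello.sh
-rwxr-xr-x 1 user user 21 ... test2/test2/hello.sh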
Related
Is there a way to prevent a bash or zsh terminal from running a specific command?
Let's say I want to prevent running rm by mistake on directory ~/foo. So I want to prevent the following command from being run:
rm -r ~/foo
Another, much broader example:
mycustomcommand -param1 foo
In the example above, foo can change, but every time one tries to run:
mycustomcommand -param1 anything
It should block and fail.
Therefore, if I run the above command in my local terminal (or on a server, etc.), the command should be blocked (and, even better, with an error/warning message).
How can we achieve such behavior?
Linux permissions
Linux has users, groups, file permissions, and file attributes to restrict what may be done with each file, and this is the simple and reliable way.
E.g., to make a file that cannot be deleted, even by root (until the attribute is reset), set the immutable attribute:
$ sudo chattr +i cannot_delete_me.txt
$ rm cannot_delete_me.txt
rm: cannot remove 'cannot_delete_me.txt': Operation not permitted
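The file becomes deletable again once the immutable attribute is cleared:
$ sudo chattr -i cannot_delete_me.txt
$ rm cannot_delete_me.txt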
To prevent a non-root account from removing the file without a confirmation prompt, it is enough to set correct file permissions:
$ chmod a-w cannot_delete_me.txt
$ rm cannot_delete_me.txt
rm: remove write-protected regular empty file 'cannot_delete_me.txt'?
# It can still be deleted by typing y + Enter
To prevent a non-root account from deleting a file without changing the file's own permissions, remove write permission from the parent directory:
$ mkdir foo && cd foo
$ touch cannot_delete_me.txt
$ chmod a-w .
$ rm cannot_delete_me.txt
rm: cannot remove 'cannot_delete_me.txt': Permission denied
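Restoring write permission on the directory makes the file deletable again:
$ chmod u+w .
$ rm cannot_delete_me.txt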
Shadowing the command name
Any command may be invoked in many ways (e.g., by different paths), so shadowing a command's name is not secure:
$ function rm { echo "I am dummy rm function. I will not remove: $1"; }
$ export -f rm
$ rm foo
I am dummy rm function. I will not remove: foo
$ /bin/rm foo # this still deletes foo
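Even without typing the full path, the shell offers ways around such a function, for example:
$ command rm foo   # the command builtin bypasses the shell function and runs the real rm
$ env rm foo       # an external lookup through env also ignores shell functions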
Use Docker
When you don't trust the script you are calling, run it inside a Docker container. Docker allows binding directories from the host system with, e.g., read-only permissions (see the mount example at the end of this walkthrough), so you can be sure the script accesses only what you allowed.
Inside the Docker image, you also may replace unwanted Linux commands.
Let's see the example:
$ mkdir my_script_environment_image && cd my_script_environment_image
Create a Dockerfile inside the my_script_environment_image directory:
FROM debian:latest
RUN echo '#!/bin/bash \n\
echo "This is a dummy rm. I will not remove: $1" \n\
' > /bin/rm
Build the image with a defined name:
sudo docker build --tag my_script_environment .
Start a temporary container (removed on exit with --rm) that runs a shell or your script:
sudo docker run -it --rm my_script_environment /bin/bash
root@f6d62a754b38:/# touch my_file.txt
root@f6d62a754b38:/# rm my_file.txt
This is a dummy rm. I will not remove: my_file.txt
So the command has been replaced by your custom version, and the script's effects are contained within the Docker environment.
In practice, I'm using Docker to control the scope of script/application effects.
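As a sketch of the read-only binding mentioned earlier (the host path and mount point are illustrative):
sudo docker run -it --rm -v "$HOME/data":/data:ro my_script_environment /bin/bash
Inside the container, /data can be read but not modified, regardless of what the script tries to do.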
I have a bash script that uses the GNU parallel command.
I find that when I run the script as a non-root user, parallel still checks the /root/ directory for a config file named .parallel.
foo.sh
#!/bin/bash
parallel --semaphore --jobs=6 "echo hello"
parallel --semaphore --wait
Running the script as root works as expected. However, running as non-root gives this error:
chown bob:bob foo.sh
sudo -u bob -g bob ./foo.sh
parallel: Error: Cannot change into non-executable dir /root/.parallel: Permission denied
I've tried using the --plain flag to ignore configs
parallel --semaphore --plain
I've tried using a config pointed at /dev/null
parallel --semaphore -C /dev/null
https://docs.oracle.com/cd/E86824_01/html/E54763/parallel-1.html
How can I make parallel command not check for /root/.parallel config file?
What is the value of $HOME in foo.sh when running?
sudo -u bob -g bob ./foo.sh
If $HOME is not changed to /home/bob but instead remains /root, that will explain what you see.
You can add:
echo $HOME
to foo.sh to check.
Or (as @jared_mamrot says) set $PARALLEL_HOME explicitly.
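For example, a sketch of both options (assuming bob's home is /home/bob; the PARALLEL_HOME path is illustrative and should be writable by bob):
sudo -H -u bob -g bob ./foo.sh                                # -H sets $HOME to bob's home directory
sudo -u bob -g bob env PARALLEL_HOME=/tmp/parallel ./foo.sh   # point parallel at a different config dir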
Why does GNU Parallel need a writable dir?
--semaphore works by creating files in that dir.
We must make sure that $HOME is a writable folder for the user that executes parallel --semaphore (or sem for short), e.g. thus:
export HOME=/tmp && parallel --semaphore <..>
Why? GNU Parallel needs $HOME to be a writable dir and tries to write to $HOME, which may default to a root-owned folder (/).
This is because parallel --semaphore needs to create temporary files in some location.
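For instance, with a writable $HOME the lock files that sem creates become visible (the directory layout may vary by GNU Parallel version):
$ export HOME=/tmp
$ parallel --semaphore --jobs=6 'echo hello'
$ ls /tmp/.parallel/semaphores/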
I am using Cygwin Terminal to run shell scripts on my Windows 7 system.
I am creating a directory, but it is getting created with a dot in the name.
test.sh
#!/bin/bash
echo "Hello World"
temp=$(date '+%d%m%Y')
dirName="Test_$temp"
dirPath=/cygdrive/c/MyFolder/"$dirName"
echo "$dirName"
echo "$dirPath"
mkdir -m 777 $dirPath
On executing sh test.sh, it creates the folder as Test_26062015 followed by three stray characters, while the expectation is Test_26062015. Why are these 3 special characters appearing, and how can I correct it?
Double quote the $dirPath in the last command and add -p to ignore mkdir failures when the directory already exists: mkdir -m 777 -p "$dirPath". Besides this, take care when combining variables and strings: dirName="Test_${temp}" looks better than dirName="Test_$temp".
Also, consider using a static-analysis tool on your scripts.
UPDATE: Analysis of the sh -x debug output showed the issue was caused by DOS-style line endings in the OP's script. Converting the file to UNIX format solved the problem.
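For reference, either of these commands converts the script in place (the sed form assumes GNU sed's -i option):
dos2unix test.sh
sed -i 's/\r$//' test.sh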
I've been creating Mac shell executables with this method:
Create a file;
Add #!/bin/sh;
Add the script;
Run chmod 755 nameofscript.
I now need to create a shell script to create a shell script in another directory and make it executable (with chmod) so that it can be run on startup.
#!/bin/sh
dir=/tmp
fnam=someshellscript
echo '#!/bin/sh' > "$dir/$fnam"
echo 'find /bin -name "*X*"' >> "$dir/$fnam"
chmod 755 "$dir/$fnam"
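If the generated script grows beyond a line or two, a quoted here-document avoids repeated echo calls; a sketch of the same script:
#!/bin/sh
dir=/tmp
fnam=someshellscript
# the quoted 'EOF' prevents variable expansion inside the generated script
cat > "$dir/$fnam" <<'EOF'
#!/bin/sh
find /bin -name "*X*"
EOF
chmod 755 "$dir/$fnam"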
#!/bin/sh
echo "script goes here!" > /path/to/place
chmod 755 /path/to/place
?
myscript.sh is
#!/bin/sh
mkdir -p $1
cp -p a.txt ./$1
cp b.txt /usr
If I invoke it with sudo ./myscript.sh, the directory $1 is owned by root, so the user can't modify a.txt (which is a problem). I could change the script to
#!/bin/sh
mkdir -p $1
cp -p a.txt ./$1
sudo cp b.txt /usr
and invoke it with just ./myscript.sh, but I get the impression this is bad practice. How should I proceed in the general case, where I don't know the user, so a hard-coded chown doesn't help?
The SUDO_USER environment variable is set by sudo to the name of the user who invoked sudo. You can use it for chown.
As for bad practices: you don't check anything, and your $1 argument substitution is broken for filenames with spaces. If that much doesn't matter, should you care about the rest?
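A minimal sketch of the SUDO_USER approach (assuming the script is still invoked via sudo; quoting added for robustness):
#!/bin/sh
mkdir -p "$1"
cp -p a.txt "./$1"
cp b.txt /usr
# $SUDO_USER is set only when running under sudo
chown -R "$SUDO_USER" "$1"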
You should add this line:
chmod ug+rw a.txt
With it, the owner and group will have read/write permissions on a.txt.