I have a bash script that uses the GNU parallel command.
I find that when I run the script as a non-root user, parallel still checks the /root/ directory for a config file named .parallel.
foo.sh
#!/bin/bash
parallel --semaphore --jobs=6 "echo hello"
parallel --semaphore --wait
Running the script as root works as expected. However, running it as non-root gives this error:
chown bob:bob foo.sh
sudo -u bob -g bob ./foo.sh
parallel: Error: Cannot change into non-executable dir /root/.parallel: Permission denied
I've tried using the --plain flag to ignore configs
parallel --semaphore --plain
I've tried using a config pointed at /dev/null
parallel --semaphore -C /dev/null
https://docs.oracle.com/cd/E86824_01/html/E54763/parallel-1.html
How can I make the parallel command not check for the /root/.parallel config file?
What is the value of $HOME in foo.sh when running?
sudo -u bob -g bob ./foo.sh
If $HOME is not changed to /home/bob but instead remains /root then that will explain what you see.
You can add:
echo $HOME
to foo.sh to check.
Or (as @jared_mamrot says) set $PARALLEL_HOME explicitly.
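For example, a minimal sketch of foo.sh along those lines (the mktemp location is just one illustrative choice of writable directory):

#!/bin/bash
# Print HOME for debugging, then point GNU Parallel at a writable
# per-run directory instead of /root/.parallel.
echo "HOME is: $HOME"
export PARALLEL_HOME="$(mktemp -d)"   # parallel keeps its config/semaphore files here
parallel --semaphore --jobs=6 "echo hello"
parallel --semaphore --wait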
Why does GNU Parallel need a writeable dir?
--semaphore works by creating files in the dir.
We must make sure that $HOME is a folder writable by the user that executes parallel --semaphore (or sem for short), e.g.:
export HOME=/tmp && parallel --semaphore <..>
Why? GNU Parallel needs $HOME to be a writable directory because parallel --semaphore has to create its temporary files somewhere, and it creates them under $HOME; if $HOME points to a directory the user cannot write to (for example a root-owned folder such as /), the command fails.
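As a minimal, self-contained illustration (using /tmp purely as an example of a directory the invoking user can write to):

#!/bin/bash
# Override HOME only for the parallel invocations, so the rest of the
# script keeps its original environment.
writable_home=/tmp
HOME="$writable_home" parallel --semaphore --jobs=6 "echo hello"
HOME="$writable_home" parallel --semaphore --wait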
Related
Is there a way to prevent a bash or zsh terminal from running a specific command?
Let's say I want to prevent accidentally running rm on directory ~/foo. So I want to prevent the following command from being run:
rm -r ~/foo
Another, much broader example:
mycustomcommand -param1 foo
In the example above, foo can change, but every time one tries to run:
mycustomcommand -param1 anything
it should be blocked and fail.
Therefore, if I run the above command in my local terminal (or on a server, etc.), the command should be blocked, ideally with an error/warning message.
How can we achieve such behavior?
Linux permissions
Linux has users, groups, file permissions and file attributes to restrict what may be done with each file; this is the simple and reliable way.
E.g. to make a file that cannot be deleted, even by root (without first resetting the attribute), set the immutable attribute:
$ sudo chattr +i cannot_delete_me.txt
$ rm cannot_delete_me.txt
rm: cannot remove 'cannot_delete_me.txt': Operation not permitted
To make rm ask for confirmation before a non-root account removes the file, it is enough to set correct file permissions:
$ chmod a-w cannot_delete_me.txt
$ rm cannot_delete_me.txt
rm: remove write-protected regular empty file 'cannot_delete_me.txt'?
# Now it is possible to delete it when typing y + enter
To prevent a non-root account from deleting a file without changing the file's own permissions, remove the write permission from the parent directory:
$ mkdir foo && cd foo
$ touch cannot_delete_me.txt
$ chmod a-w .
$ rm cannot_delete_me.txt
rm: cannot remove 'cannot_delete_me.txt': Permission denied
Shadowing the command name
Any command may be called in many ways (e.g. via different paths), so shadowing the command name is not secure:
$ function rm { echo "I am dummy rm function. I will not remove: $1"; }
$ export -f rm
$ rm foo
I am dummy rm function. I will not remove: foo
$ /bin/rm foo # It will delete foo
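Applied to the mycustomcommand example from the question, such an (easily bypassed) guard could look like the sketch below; mycustomcommand is the hypothetical command name from the question, and the function only takes effect in shells that source it (e.g. via ~/.bashrc or ~/.zshrc):

function mycustomcommand {
    if [ "$1" = "-param1" ]; then
        echo "mycustomcommand -param1 is blocked on this machine" >&2
        return 1
    fi
    command mycustomcommand "$@"   # other invocations fall through to the real command
}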
Use Docker
When you don't trust the script you are calling, run it inside a Docker container. Docker lets you bind directories from the host system with, for example, read-only permissions, so you can be sure the script only accesses what you allowed.
Inside the Docker image, you may also replace unwanted Linux commands.
Let's see the example:
$ mkdir my_script_environment_image && cd my_script_environment_image
Create a Dockerfile inside the my_script_environment_image directory:
FROM debian:latest
# Overwrite /bin/rm with a harmless stub (the existing execute bit is kept).
RUN echo '#!/bin/bash \n\
echo "This is a dummy rm. I will not remove: $1" \n\
' > /bin/rm
Build the image with a defined name:
sudo docker build --tag my_script_environment .
Start a temporary container (the --rm flag removes it afterwards) that runs a shell or your script.
sudo docker run -it --rm my_script_environment /bin/bash
root@f6d62a754b38:/# touch my_file.txt
root@f6d62a754b38:/# rm my_file.txt
This is a dummy rm. I will not remove: my_file.txt
So the rm command has been replaced by the custom version inside the Docker environment.
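If the goal is only to keep a script from modifying your files, a read-only bind mount is often enough; the paths and script name below are illustrative:

# Mount the current directory into the container read-only; any attempt
# by the script to write under /work will fail.
sudo docker run -it --rm \
    -v "$PWD":/work:ro \
    my_script_environment /bin/bash -c 'cd /work && ./untrusted_script.sh'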
In practice, I use Docker to limit the scope of a script's or application's effects.
I'm trying to make this bash script work, but I get this error: Error reading *.docx. The file doesn’t exist.
Here's the script:
#!/bin/bash
textutil -convert txt *.docx
cat *.txt | wc -w
I'm currently running it from the folder containing the files, but I'd like to make it a global script I can call from any folder.
If you want to make it available across your whole system, you need to move it to a bin location on your PATH, like so:
chmod a+rx yourscript.sh && sudo mv yourscript.sh /usr/local/bin/yourscript
Then you can use it like a normal command from any folder.
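As a side note, the "Error reading *.docx" message appears when the glob has nothing to match in the current folder, so textutil receives the literal string *.docx. A slightly more defensive sketch of the script (the nullglob check is just one way to handle that case):

#!/bin/bash
# Count words across all .docx files in the folder the command is run from.
shopt -s nullglob                 # make *.docx expand to nothing when there are no matches
files=(*.docx)
if [ ${#files[@]} -eq 0 ]; then
    echo "No .docx files in $PWD" >&2
    exit 1
fi
textutil -convert txt "${files[@]}"
cat ./*.txt | wc -w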
I executed the following command:
cd /mnt/c/Users/Daniel/Documents/Assg/ | cat file.txt
My question is: why doesn't it change the directory? The contents of file.txt are displayed, but the directory is not changed. I understand that if we execute the same command with the parts in the following order, it won't work, because cd changes the directory in a child process, so the net result is the same.
cat file.txt | cd /mnt/c/Users/Daniel/Documents/Assg/
Try just cd /mnt/c/Users/Daniel/Documents/Assg/
As was already stated, the following:
cd /mnt/c/Users/Daniel/Documents/Assg/
should do the trick, but I'd like to go a bit more into why the command you presented doesn't work as expected. In Bash (and other shells), you can have multiple "subshells" running under a parent shell. Each of these subshells has its own working directory. When you run commands in a pipeline, as you have done, a subshell is created. The working directory of that subshell was changed, but that didn't have any effect on the shell you were working in.
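You can see this for yourself; the cd inside the pipeline runs in a subshell, so the parent shell's working directory never changes (the starting directory below is just an example):

$ pwd
/home/daniel
$ cd /tmp | cat    # the cd succeeds, but only inside the pipeline's subshell
$ pwd
/home/daniel       # the parent shell is still where it was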
It depends on the shell you use
When you run two commands in a pipeline, typically one or both of the commands is run in a separate child process.
In older shells this would be both, in later shells this can be either
the first or the last.
At one point, the ksh93 team decided to make the last command in the pipeline the parent. This prevents certain race conditions and, if the command is a builtin, lets it run inside the current shell
process and preserve the results of the pipeline.
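A quick way to observe that difference, assuming ksh93 is installed alongside bash:

$ echo hello | read line; echo "got: $line"
got:           # in bash the read ran in a subshell, so $line is empty afterwards
$ ksh93 -c 'echo hello | read line; echo "got: $line"'
got: hello     # ksh93 runs the last pipeline element in the current shell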
Nevertheless, cd is a command that does not consume or produce any input or output (except for diagnostics on stderr), and using it in a pipeline
by itself is just silly. A better, because more predictable, command line would be:
cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt
This will assure that cat only runs if cd succeeds, and will then
show the contents of file.txt from the given directory.
You have different options.
Perform cat after trying to change dir
cd /mnt/c/Users/Daniel/Documents/Assg/ ; cat file.txt
Perform cat only when change dir worked
cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt
Perform cat in the other directory, but return to the current dir when finished.
(cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt)
# or
cat /mnt/c/Users/Daniel/Documents/Assg/file.txt
EDIT:
Your question, "why doesn't cd /mnt/c/Users/Daniel/Documents/Assg/ | cat file.txt change directory?", can be answered in two ways.
The technical explanation is given by @Henk (the pipe introduces a subshell, and environment settings made in a subshell are lost when the subshell exits).
The functional explanation is that you used the wrong syntax for what you are trying to accomplish.
I am using Cygwin Terminal to execute shell scripts on my Windows 7 system.
I am creating a directory, but it is getting created with a dot in the name.
test.sh
#!/bin/bash
echo "Hello World"
temp=$(date '+%d%m%Y')
dirName="Test_$temp"
dirPath=/cygdrive/c/MyFolder/"$dirName"
echo "$dirName"
echo "$dirPath"
mkdir -m 777 $dirPath
On executing sh test.sh, it creates the folder as Test_26062015 followed by special characters, while the expectation is just Test_26062015. Why are these 3 special characters appearing, and how can I correct it?
Double quote the $dirPath in the last command and add -p to ignore mkdir failures when the directory already exists: mkdir -m 777 -p "$dirPath". Besides this, take care when combining variables and strings: dirName="Test_${temp}" looks better than dirName="Test_$temp".
Also, consider running your scripts through a shell-script static analysis tool.
UPDATE: Judging by the debug output of sh -x, the issue was caused by DOS-style line endings in the OP's script. Converting the file to UNIX format solved the problem.
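If you hit the same symptom, a quick way to confirm and fix the line endings (dos2unix may not be installed everywhere; tr is a portable fallback):

cat -v test.sh                # DOS line endings show up as ^M at the end of each line
dos2unix test.sh              # convert in place, if dos2unix is available
tr -d '\r' < test.sh > test_unix.sh && mv test_unix.sh test.sh   # portable alternative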
If I make a directory with mkdir -p, it causes problems with scripts
$ mkdir -p test2/test2
$ cd test2/test2
$ echo '#!/bin/sh
> echo hello' > hello.sh
$ ./hello.sh
bash: ./hello.sh: Permission denied
This has nothing to do with mkdir. You simply haven't given hello.sh executable permissions. You need the following:
chmod +x hello.sh
Check your permissions
Check the permissions on your directories and the script itself. There may be a problem there, although it's unlikely.
ls -lad test2/test2
ls -l test2/test2/hello.sh
You can always use the --mode flag with mkdir if your permissions aren't being set correctly for some reason. See chmod(1) and mkdir(1) for more information.
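For instance, a sketch of creating the nested directories with explicit permissions up front (note that with -p the intermediate directory may still get its mode from your umask):

mkdir -p --mode=755 test2/test2
ls -ld test2 test2/test2      # verify the permissions that were actually applied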
Execute the file directly
You can execute the script with Bash directly, rather than relying on a shebang line or the executable bit, as long as the file is readable by the current user. For example:
bash test2/test2/hello.sh
Change file permissions
If you can execute the file when invoked explicitly with Bash, then you just need to make sure your file has the execute bit set. For example:
chmod 755 test2/test2/hello.sh