How to permanently set $PATH on Raspbian GNU/Linux 10 [closed] - bash

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
I want to add my Samba binary paths to the global $PATH variable on my rpi4, but it does not work as expected.
I've created a file samba-binary-path.sh in the folder /etc/profile.sh and made it executable with chmod +x.
The file samba-binary-path.sh contains the following:
export PATH=/usr/local/samba/bin/:/usr/local/samba/sbin/:$PATH
Furthermore, I also have export PATH=/usr/local/samba/bin/:/usr/local/samba/sbin/:$PATH saved in the file /etc/environment.
Now comes the crazy part. When I execute my script from my CLI it works as intended, but when it is started from another process, the PATH variable is missing my Samba binary path.
The affected code block:
#!/bin/bash
BINDIR=$(samba -b | grep 'BINDIR' | grep -v 'SBINDIR' | awk '{print $NF}')
[[ -z $BINDIR ]] && printf "Cannot find the 'samba' binary, is it installed?"
For debugging purposes I logged the $PATH variable to /var/log/syslog.
Here is the result:
Executed on cli: $PATH=/usr/local/samba/bin/:/usr/local/samba/sbin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Executed from other process: $PATH=/sbin:/bin:/usr/sbin:/usr/bin
Where do I specify the path so that the samba binary could be always found?
Regards,
Ronny

How to permanently set $PATH on Raspbian GNU/Linux 10
To permanently change PATH for every possible environment in which PATH is not explicitly set, such as a new non-interactive, non-login shell that does not inherit PATH from its parent process, you would have to recompile Bash with a different value of DEFAULT_PATH_VALUE (there's a ./configure option for it, if I remember correctly).
Where do I specify the path so that the samba binary could be always found?
You specify it in your script.
PATH=$PATH:/some/path
# or explicitly
bindir=$(/the/path/to/samba -b ....)
You could also explicitly invoke a login shell when running the script, thereby sourcing the /etc/profile* files.
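For the first option, a minimal sketch of the script with the PATH fix inlined (paths copied from the question):

```shell
#!/bin/bash
# Prepend the Samba directories inside the script itself, so it no
# longer depends on the environment of whatever process starts it.
PATH=/usr/local/samba/bin:/usr/local/samba/sbin:$PATH
export PATH

# 'command -v' prints the resolved path, or nothing if not found.
echo "samba resolves to: $(command -v samba || echo '<not found>')"
```

For the second option, the caller runs the script as bash -l /path/to/script.sh, which sources /etc/profile (and the files under /etc/profile.d) before the script starts.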

Related

echo, printf don't work in script when it is run as a command [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
I tested the script in WSL2.
Here is my script:
#!/usr/bin/bash
echo "testing..."
printf "testing..."
It works fine if I run it like:
bash test
source test
. test
But it outputs nothing if I add the directory containing the script to PATH and run:
test
Why and how can I fix it?
test is a bash built-in. POSIX systems will also have a test executable.
When you enter a command without specifying a path to the executable, bash will first check if the command is one of its built-in commands before searching for the executable in the PATH. If the command matches the name of one of the bash built-ins, it will run the built-in.
If you still want to run your script without specifying its path, there are two ways to do it:
1. Recommended: rename your file, then run it with its new name (the script file needs its executable permission bit(s) set).
2. Make sure your script has its file permissions set so that it is executable, make sure your PATH is set up so that your test is found before the system's test, and then run env test to run your script. env will search your PATH to find your test executable, and then it will execute it.
Ultimately, option 2 is not recommended: reordering your PATH can be brittle, and having a second test binary on your system can be confusing (for you and for others).
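You can see the lookup order for yourself; a small sketch (the exact paths printed will vary by system):

```shell
# 'type -a' lists every definition of a name in lookup order;
# the shell builtin is listed before any executable found in PATH.
type -a test

# 'env' bypasses shell builtins entirely: it only searches PATH,
# so it runs an external 'test' executable.
env test -n hello && echo "env ran an external test binary"
```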

Bash - wait for a process (here gcc) [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 7 years ago.
On my Ubuntu machine I want to create a custom command for compiling a c file.
At the moment I have something like this, which does not work the way I want:
#compile the file
gcc $1 -o ~/.compile-c-output
#run the program
./~/.compile-c-output
#delete the output file
rm ~/.compile-c-output
The problem is that the run command is executed before gcc is ready, and so the file does not exist. How can I wait until gcc is done so that I can run the file normally?
By the way, how can I add a random number to the output file name so this script also works if I run it in two different terminals?
./~/.compile-c-output
Get rid of the leading ./. That's why the file doesn't exist.
~/.compile-c-output
To get a random file name, use mktemp. mktemp guarantees not to overwrite existing files.
file=$(mktemp) # unspecified file name in /tmp
gcc "$1" -o "$file" && "$file"
rm "$file"

Building Program-Specific Shortcuts in UNIX [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
I have a program, called carmel, which I can run from the command line via:
carmel -h
or whichever suffix I choose. When loading a file, I can say:
carmel fsa1.fst, where fsa1.fst is located in my home folder, /Users/adam/.
I would prefer to have the default file location be, e.g., /Users/adam/carmel/files, and would prefer to not type that in every time. Is there a way to let UNIX know, when I type carmel to then look in that location?
There is no standard Unix shortcut for this behaviour. Some applications will check an environment variable to see where their files are, but looking at carmel/src/carmel.cc on GitHub, I'd say you'd have to write a wrapper script. Like this:
#!/usr/bin/env bash
# Save as ${HOME}/bin/carmel and ensure ${HOME}/bin is before
# ${carmel_bin_dir} in your ${PATH}. Also ensure this script
# has the executable bit set.
carmel_bin_dir=/usr/local/bin # TODO change this?
working_directory=${CARMEL_HOME-${HOME}/carmel/files}
if [[ ! -d "${working_directory}" ]]; then
echo "${working_directory} does not exist. Creating."
mkdir -p "${working_directory}" || echo "Failed to create ${working_directory}"
fi
pushd "${working_directory}"
echo "Launching ${carmel_bin_dir}/carmel $* from $(pwd)..."
"${carmel_bin_dir}/carmel" "$@"
popd
Alternatively, since the source is freely available, you could add some code to read ${CARMEL_HOME} (or similar) and submit this as a pull request.
Good luck!

tab-expansion and "./" bash shell [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
Maybe someone here can help me out. I have installed Ubuntu 12.04 LTS (Kubuntu) on two machines. The .bashrc and .bash_profile files are identical, and the file structure on each machine is the same.
On machine 1: I run bash scripts within a terminal window with the simple: ./scriptname.sh
On machine 2: I cannot do this and must use: sh scriptname.sh
Nor can I use ./ and tab-complete the script filename.
All executable bits are set correctly, all files and folders have the correct permissions. In the header of the scripts the shebang is set correctly.
Any ideas why this would be occurring?
If I try to execute the script with ./file_motion_grab.sh:
bash: ./file_motion_grab.sh: Permission denied
When I try ls -l, I get:
-rwxrwxrwx 1 adelie adelie 351 Nov 4 20:32 file_motion_grab.sh
Output of getfacl is:
# file: file_motion_grab.sh
# owner: adelie
# group: adelie
user::rwx
group::rwx
other::rwx
More generally, any new script on the second machine must be invoked with sh scriptname.sh. Something is probably wrong in the .bash files, but I'm not sure where to look.
I would recommend trying ls -al to check the permissions on the file and the directory. Also, try getfacl file.sh, because sometimes there are ACL permissions that override the normal Unix permission bits.
Then I would try head -n 1 file.sh | xxd, to look at the first line, and make sure the shebang is there properly as the first two characters of the file. Sometimes, hidden characters, like a Unicode BOM, can cause it not to be interpreted properly.
Then I would check the permissions on the shell itself. ls -l /bin/bash and getfacl /bin/bash. I would also check to see if this happens with other interpreters; can you use #!/bin/sh for a script? #!/bin/python (or Perl, or Ruby, or something of the sort)? Distinguishing whether this happens only for /bin/bash or for other shells would be helpful.
Also, take a look at ls /proc/sys/fs/binfmt_misc to see if you have any binary formats configured that might interfere with normal interpretation of a shell script.
Try testing from another account as well (if you can). Is the problem unique to your account? I would also try rebooting, in case there is just some transient corruption that is causing a problem (again, if you can).
(answer was originally a series of comments)

Shell script and CRON problems [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 4 years ago.
I've written a backup script for our local dev server (running Ubuntu server edition 9.10), just a simple script to tar & gzip the local root and stick it in a backup folder.
It works fine when I run :
$ bash backups.sh
but it won't work when I run it through crontab:
59 23 * * * bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2> $1
I get the error message
/bin/sh: cannot create : nonexistent
The script makes the tar.gz in the folder it is running from (/home/user1), but then tries to copy it to a mounted share (/home/backups, which is really 192.168.0.6/backups) on a network drive, mounted via fstab.
The mounted share has permissions 777 but the owner and group are different to those running the script.
I'm using bash to run the script instead of sh, to get around another issue I've had in the past with "bad substitution" errors.
The first 2 lines of the file are
! /bin/bash
cd /home/vnc/backups
I'm probably not supplying enough information to fully answer this post yet; I can post more information as necessary, but I don't really know where to look next.
The clue is in the error message:
/bin/sh: cannot create : nonexistent
Notice that it says "sh". The Bourne shell doesn't support some features that are specific to Bash. If you're using Bash features, then you need to tell Bash to run the script.
Make the first line of your file:
#!/bin/bash
or in your crontab entry do this:
* * * * * /bin/bash scriptname
Without seeing your crontab entry and your script it's hard to be any more specific.
Perhaps the first thing you should do in your backups.sh is insert a cd /home/user1. crond may execute your script from a different directory than you think it does, and forcing it to use the same directory regardless of how it is executed could be a good first start.
Another potentially useful debugging step is to add id > /tmp/id.$$ or something like that, so you can see exactly which user account and groups are being used to run your script.
In crontab, just change 2>$1 to 2>&1. I've just done one myself. Thank you Dennis Williamson.
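For reference, the corrected crontab entry (paths copied from the question). With $1 unset, 2> $1 expands to a redirection with no target, which is exactly what /bin/sh: cannot create : nonexistent complains about:

```shell
59 23 * * * /bin/bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2>&1
```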
