Shell script and CRON problems [closed] - bash

I've written a backup script for our local dev server (running Ubuntu server edition 9.10), just a simple script to tar & gzip the local root and stick it in a backup folder.
It works fine when I run:
$ bash backups.sh
but it won't work when I run it through crontab:
59 23 * * * bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2> $1
I get the error message
/bin/sh: cannot create : nonexistent
The script makes the tar.gz in the folder it runs from (/home/user1), but then tries to copy it to a share (/home/backups, which is really 192.168.0.6/backups on a network drive) mounted via fstab.
The mounted share has permissions 777, but the owner and group are different from the user running the script.
I'm using bash to run the script instead of sh to get around another issue I've had in the past with "bad substitution" errors.
The first two lines of the file are:
#! /bin/bash
cd /home/vnc/backups
I'm probably not supplying enough information to fully answer this yet; I can post more as necessary, but I don't really know where to look next.

The clue is in the error message:
/bin/sh: cannot create : nonexistent
Notice that it says "sh". The Bourne shell doesn't support some features that are specific to Bash. If you're using Bash features, then you need to tell Bash to run the script.
Make the first line of your file:
#!/bin/bash
or in your crontab entry do this:
* * * * * /bin/bash scriptname
Without seeing your crontab entry and your script it's hard to be any more specific.

Perhaps the first thing you should do in your backups.sh is insert a cd /home/user1. crond may execute your script from a different directory than you think it does, and forcing it to use the same directory regardless of how it is executed would be a good first step.
Another potentially useful debugging step is to add id > /tmp/id.$$ or something like that, so you can see exactly which user account and groups are being used to run your script.
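A sketch of what the top of backup.sh might look like with both suggestions applied (the cd target is taken from this answer; adjust it to your layout):
#!/bin/bash
cd /home/user1     # pin the working directory; cron may start somewhere else
id > /tmp/id.$$    # record which user and groups actually run the script under cron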

In the crontab entry, just change 2> $1 to 2>&1. I've just made the same mistake myself. Thank you, Dennis Williamson.
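With that fix, the crontab entry from the question becomes:
59 23 * * * bash /home/vnc/backups/backup.sh >> /home/vnc/backups/backup.log 2>&1
2>&1 sends stderr to the same place as stdout (the log file). The original 2> $1 asked sh to redirect stderr to an empty filename, since $1 is unset in a crontab line, which is exactly what "/bin/sh: cannot create : nonexistent" is complaining about.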

Related

echo, printf do not work in script when run as a command [closed]

I am testing the script in WSL2.
Here is my script:
#!/usr/bin/bash
echo "testing..."
printf "testing..."
It works fine if I run it like:
bash test
source test
. test
But it outputs nothing if I add the directory the script is located in to PATH and run:
test
Why and how can I fix it?
test is a bash built-in. POSIX systems will also have a test executable.
When you enter a command without specifying a path to the executable, bash will first check if the command is one of its built-in commands before searching for the executable in the PATH. If the command matches the name of one of the bash built-ins, it will run the built-in.
If you still want to run your script without specifying its path, there are two ways to do it:
Recommended: Rename your file, and then run it with its new name (your script file needs to have its executable permission bit(s) set).
Make sure your script has its file permissions set so that it is executable, make sure your PATH is set up so that your test will be found before the system's test, and then run env test to run your script. env will search your PATH to find your test executable, and then it will execute it.
Ultimately, option 2 is not recommended, because it can be brittle to reorder your PATH, and it can be confusing (for you and for others) to have a second test binary on your system.
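For illustration, assuming the script's directory is the current one and mytest.sh is a hypothetical new name:
$ type -a test        # shows the shell built-in first, then /usr/bin/test
$ mv test mytest.sh   # rename away from the built-in name
$ chmod +x mytest.sh  # make sure the executable bit is set
$ mytest.sh           # now resolved through PATH, with no built-in in the way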

Command runs in terminal, but doesn't work in script [closed]

On Linux Mint, the command "timedatectl" works as expected from the terminal, but doesn't work from a bash script. I think I made some very stupid mistake, as I'm a complete newbie in scripting, but I can't find it myself.
#!/bin/bash
timedatectl set-timezone UTC
timedatectl status
prints out
Failed to set time zone: Invalid time zone 'UTC'
Unknown operation status
The same commands work just fine from the terminal, setting the timezone and printing out the times and time settings.
UPD:
which timedatectl prints out /usr/bin/timedatectl, but adding this full path to the calls in the script doesn't change the output.
Trying to set another timezone succeeds from the terminal (timezone changes), but fails from the script. (Failed to set time zone: Invalid time zone 'Europe/Paris')
echo $PATH returns
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
The system doesn't ask for password to execute these commands from the terminal.
UPD2: I've found what was wrong, thanks to the comment by Whydoubt.
I was calling the script like
bash script1.sh
but after the comment I tried to call it like
./script1.sh
and got
bash: ./script1.sh: /bin/bash^M: bad interpreter: No such file or directory
So I realised the problem was with the newline symbol (the original script was created on a Windows machine with Notepad++). After retyping the script with xed, everything started to work as expected.
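For reference, the same fix is possible without retyping the file; a sketch assuming the script is named script1.sh as above:
$ cat -A script1.sh | head -n 1   # DOS line endings show up as ^M$ at the end of each line
$ sed -i 's/\r$//' script1.sh     # strip the carriage returns in place (GNU sed)
dos2unix script1.sh does the same thing if that tool is installed.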
The output of uname -a, hostname, hostid, hostnamectl, who, groups, stat /usr/share/zoneinfo and stat /usr/share/zoneinfo/UTC is the same from the console and from the script, and /usr/share/zoneinfo/UTC is accessible from the script. I don't think I need to show the output, as it's rather lengthy and my mistake has already been found.
Should I write an answer myself to make the question answered and completed?

Security of su root in bash script [closed]

I have a bash script which runs commands that require root privileges. I'm trying to decide between setting "su root" at the start of the script, or running each command prefixed with "sudo". What are the pros and cons of these methods, and which is more secure? Or is there a better method to use? Thanks!
sudo is better for security. If you have any vulnerabilities in your script, those can be exploited if you are running as root. By using sudo, you are limiting your holes to only the commands you prefix with it. So, assuming those commands are secure, using sudo in your script will be secure as well.
The best and safest method is to actually call the script with sudo, e.g. sudo scriptName. Putting su root or sudo commandInScript in the script does essentially the same thing: it requests root access from inside the script, rather than having the script run as root from the moment you call it.
I agree with trenin that sudo would be the best way to do it, but it might be annoying to type sudo in front of every command. You may as well:
do stuff
change to root
do super user stuff
change back to regular user
Also, if you are running a script as root, having part of it fail may cause unexpected behaviour. I would recommend adding set -e, since bash will then exit immediately if any command exits with a non-zero status. You can undo this effect with set +e.
Another way would be to run your whole script as root (su) but allow access to it only to certain users, as described by DaveParillo.
Hope it helps.
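A minimal sketch of the sudo-per-command approach combined with set -e, as described above (the tar command is a hypothetical stand-in for the privileged work):
#!/bin/bash
set -e                                      # abort on the first failing command
whoami                                      # runs as the regular user
sudo tar -czf /tmp/etc-backup.tar.gz /etc   # only this command runs as root
set +e                                      # later failures no longer abort the script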

tab-expansion and "./" in the bash shell [closed]

Maybe someone here can help me out. I have installed Ubuntu 12.04 LTS (Kubuntu) on two machines. The .bashrc and .bash_profile files are identical, as the file structure on each machine is the same.
On machine 1: I run bash scripts within a terminal window with the simple: ./scriptname.sh
On machine 2: I cannot do this and must use: sh scriptname.sh
Nor can I use ./ and tab-complete the script filename.
All executable bits are set correctly, all files and folders have the correct permissions. In the header of the scripts the shebang is set correctly.
Any ideas why this would be occurring?
If I try to execute the script with ./file_motion_grab.sh:
bash: ./file_motion_grab.sh: Permission denied
When I try ls -l, I get:
-rwxrwxrwx 1 adelie adelie 351 Nov 4 20:32 file_motion_grab.sh
Output of getfacl is:
# file: file_motion_grab.sh
# owner: adelie
# group: adelie
user::rwx
group::rwx
other::rwx
More generally, any new script on the second machine must be invoked with sh scriptname.sh. Something is probably wrong in the .bash files, but I'm not sure where to look.
I would recommend trying ls -al to check the permissions on the file and the directory. Also, try getfacl file.sh, because sometimes there are ACL permissions that override the normal Unix permission bits.
Then I would try head -n 1 file.sh | xxd, to look at the first line, and make sure the shebang is there properly as the first two characters of the file. Sometimes, hidden characters, like a Unicode BOM, can cause it not to be interpreted properly.
Then I would check the permissions on the shell itself. ls -l /bin/bash and getfacl /bin/bash. I would also check to see if this happens with other interpreters; can you use #!/bin/sh for a script? #!/bin/python (or Perl, or Ruby, or something of the sort)? Distinguishing whether this happens only for /bin/bash or for other shells would be helpful.
Also, take a look at ls /proc/sys/fs/binfmt_misc to see if you have any binary formats configured that might interfere with normal interpretation of a shell script.
Try testing from another account as well (if you can). Is the problem unique to your account? I would also try rebooting, in case there is just some transient corruption that is causing a problem (again, if you can).
(answer was originally a series of comments)
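The checks above, collected into one diagnostic pass (the file name is taken from the question):
$ ls -al file_motion_grab.sh             # permission bits on the file itself
$ getfacl file_motion_grab.sh            # ACLs that may override the mode bits
$ head -n 1 file_motion_grab.sh | xxd    # the shebang #! must be the first two bytes, with no BOM
$ ls -l /bin/bash && getfacl /bin/bash   # the interpreter itself must be executable
$ ls /proc/sys/fs/binfmt_misc            # registered binary formats that could interfere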

Why is my expect script failing on line 1? [closed]

The very first line of my expect script fails. Here are the entire contents of my script:
#!/usr/bin/expect -f
And it fails right off the bat with
": no such file or directory
as my response. Expect is in fact installed and is located in /usr/bin/, and I am running this as root. I have no extra spaces or lines before the # sign either. Of course there was more to the script originally, but it fails way before it gets to the good stuff.
Tried it and here is the result: /usr/bin/expect^M: bad interpreter
Is it possible that there's a Windows newline (the ^M) in there that's confusing the script? You can try od to see what newline character(s) follow the expect line, and tofrodos or an editor (e.g. emacs in hexl-mode) to remove it. See the man pages for more info.
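For instance, a check along those lines (the script name here is hypothetical):
$ head -n 1 myscript.exp | od -c   # a DOS-format file ends the line with \r \n instead of just \n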
I had this issue and found I didn't have the expect interpreter installed! Oddly enough, if you ran the command in the shell it worked. However, through a shell script I got this error:
/usr/bin/expect: bad interpreter: No such file or directory
I fixed it by simply installing the Expect interpreter. The packages that were installed were: expect libtcl8.6
Just run:
sudo apt-get install expect
Your line endings are wrong. Shove it through dos2unix or tr -d '\r'.
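For example, a sketch of the tr variant (myscript.exp is a hypothetical name; tr cannot edit in place, so write to a temporary file first):
$ tr -d '\r' < myscript.exp > fixed.exp && mv fixed.exp myscript.exp
$ chmod +x myscript.exp   # the rewritten file needs its executable bit restored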
I don't really know expect, to be honest, but when I run that on my system it "works" fine. Nothing happens, but that's what I'd expect. I don't get any error message. According to the man page,
#!/usr/bin/expect -f
is the correct way to start your script. Expect then slurps up the script you are executing as the cmdfile.
The way I was able to reproduce the problem was to actually put a ^M at the end of the line instead of a normal newline (Bert F's response prompted me to try it). vim's :set list command will show any odd characters.
If you look at the error, there is a Windows newline character, added because the file was copied from a Windows machine via mail or WinSCP. To avoid this error, copy the script from Linux to Linux using scp and then execute it.
