Permission denied while creating directory/file in shell script - bash

I have a command in my shell script (increment.sh):
TMP_FILE=/tmp/sbg_clickstream.tmp
hive -e "select * from $HIVE_TMP_TABLE;" > $TMP_FILE
I am getting an error on the 2nd line:
/tmp/sbg_clickstream.tmp: Permission denied
Error: Error occured while opening data file
Error: Load Failed, records not inserted.
Load failed (exit code 1)
I tried chmod 750 /tmp/sbg_clickstream.tmp and ran the script again, but I still get the same error.
I am new to shell scripting and assumed the file was created by the assignment TMP_FILE=/tmp/sbg_clickstream.tmp, but the test above showed that it is not.
I now think the file is actually created by this line:
hive -e "select * from $HIVE_TMP_TABLE;" > $TMP_FILE
How can I set the right permissions on the file while the query is creating and populating it?

Two questions:
What user is running the script?
Who owns the file /tmp/sbg_clickstream.tmp?
If the user running the script owns the file, then the problem may be with a shell setting.
Type set -o at the command line and check the value of clobber or noclobber. If noclobber is on (or clobber is off), the shell refuses to overwrite an existing file via redirection, and you'll need to change that setting.
If your shell uses an option named clobber, you need to do this:
$ set -o clobber # Turns clobber on
If your shell uses noclobber (as both BASH and the Kornshell do), you need to do this:
$ set +o noclobber # Turns noclobber off
Yes, the -o/+o parameters seem backwards.
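A quick way to see the behaviour for yourself in bash (a throwaway sketch run in any writable directory; demo.txt is a made-up name):

```shell
#!/bin/bash
# Demonstrate noclobber: with it on, > refuses to overwrite an existing file.
echo first > demo.txt
set -o noclobber                  # turn noclobber on
echo second 2>/dev/null > demo.txt && echo "overwrote" || echo "refused"   # prints "refused"
echo third >| demo.txt            # >| forces the overwrite even under noclobber
set +o noclobber                  # turn noclobber back off
cat demo.txt                      # prints "third"
rm -f demo.txt
```

The `>|` operator is bash's built-in escape hatch: it overwrites regardless of the noclobber setting, so you can leave noclobber on globally and override it per-redirection.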
Hint:
Instead of using the same file name over and over, try this:
TMP_FILE=/tmp/sbg_clickstream.$$.tmp
hive -e "select * from $HIVE_TMP_TABLE;" > $TMP_FILE
The $$ expands to the shell's process ID, so it changes on every run. The system normally cleans out the /tmp directory on reboot, and between reboots PIDs climb and don't quickly repeat. This way, each run generates a new temp file, and you don't have to worry about clobbering at all.
Even Better Hint
See if your system has a mktemp command. This will generate a unique temporary file name:
TMP_FILE=$(mktemp -t sbg_clickstream.XXXXX)
echo "The tempfile is '$TMP_FILE'"
hive -e "select * from $HIVE_TMP_TABLE;" > $TMP_FILE
This may echo something like this:
The tempfile is /tmp/sbg_clickstream.Ds23d
mktemp actually creates the file, so the name is guaranteed to be valid and unique.
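A common companion pattern is to register a trap so the temp file is removed automatically when the script exits, even if the command writing to it fails partway through. A sketch (the hive line from the question is shown commented out and replaced by a stand-in, since it needs a Hive installation):

```shell
#!/bin/bash
# Create a unique temp file and remove it automatically on exit.
TMP_FILE=$(mktemp -t sbg_clickstream.XXXXX) || exit 1
trap 'rm -f "$TMP_FILE"' EXIT

echo "The tempfile is '$TMP_FILE'"
# hive -e "select * from $HIVE_TMP_TABLE;" > "$TMP_FILE"   # the original query
printf 'stand-in output\n' > "$TMP_FILE"                   # stand-in for hive
wc -l < "$TMP_FILE"
```

The EXIT trap fires on normal exit and on most fatal signals, so the file never lingers in /tmp.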

chmod/chown is your friend but you need to ensure that the user/group running the shell script has permission to write to the /tmp directory.
What is the output of the following:
ls -ld /tmp
whoami
groups
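For comparison, on a typical Linux system /tmp is world-writable with the sticky bit set (mode 1777, the trailing t in the mode string); if your /tmp is missing the write bits, that alone explains the Permission denied error:

```shell
# A healthy /tmp looks like: drwxrwxrwt ... root root ... /tmp
ls -ld /tmp
stat -c '%a' /tmp    # expect 1777 (GNU stat; the leading 1 is the sticky bit)
```

The sticky bit lets every user create files in /tmp while only a file's owner (or root) may delete or rename them.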

Variables are cleared after using a sudo command in shell script

I am writing a script for deployment. For that I need to log in and then run the procedure. I log in successfully and then try to become the sudo user, but after doing that, every variable stored in the script is cleared if I use it after the sudo command. If I use a variable before the sudo command, I can see its value.
#!/bin/bash
proj=$1 # passed in from another script; value is lvtools
echo "variables are: ${proj}" # proj has its value here
sudo -Hiu lvadmin
ls
path=/home/lvadmin/lvsvnprojects/QAUat/"${proj}" # path is formed incorrectly because proj is empty
echo "path after admin is: ${proj}" # value is EMPTY
cd "$path"
ls
If the code worked correctly, it would change directory to the specified location.
Firstly, you need to export your var:
export proj=$1
Then you can use the -E flag for the sudo command, if your sudoers policy allows it:
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
Your code should look like this:
#!/bin/bash
export PROJ=$1 # passed in from another script; value is lvtools
echo "variables are: ${PROJ}" # PROJ has its value here
sudo -EHiu lvadmin
ls
path=/home/lvadmin/lvsvnprojects/QAUat/"${PROJ}"
echo "path after admin is: ${PROJ}" # value is not empty
cd "$path"
ls
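An alternative sketch that side-steps environment passing entirely: let the calling shell expand the variable before the privileged command starts, by putting the whole privileged part into one sudo invocation. The user name and paths below are from the question; the sudo line is shown commented out because it needs the lvadmin account to exist:

```shell
#!/bin/bash
proj=lvtools
# The calling shell expands $proj before the inner shell starts, so the
# value survives even though the inner shell has a fresh environment:
bash -c "echo path is /home/lvadmin/lvsvnprojects/QAUat/$proj"
# With sudo the idea is identical:
#   sudo -Hu lvadmin bash -c "cd /home/lvadmin/lvsvnprojects/QAUat/$proj && ls"
```

This also avoids a subtle trap in the original script: a bare `sudo -Hiu lvadmin` starts an interactive shell, and the lines after it only run once that shell exits.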

Bash Script Can't Write To Log Files

I've created a simple bash script that grabs some data and then outputs it to a log file. When I run the script without sudo it fails to write to the logs and says they are write-protected. It then asks me if it should remove the write protection, but this also fails (permission denied).
If I run the script as sudo it appears to work without issue. How can I set these log file to be available to the script?
cd /home/pi/scripts/powermonitor/
python /home/pi/powermonitor/plugpower.py > plug.log
echo -e "$(sed '1d' /home/pi/scripts/powermonitor/plug.log)\n" > plug.log
sed 's/^.\{139\}//' plug.log > plug1.log
rm plug.log
grep -o -E '[0-9]+' plug1.log > plug.log
rm plug1.log
sed -n '1p' plug.log > plug1.log
rm plug.log
perl -pe '
I was being dumb. I just needed to set the write permissions on the log files.
The ability to write a file depends on the file permissions that have been assigned to that file or, if the file does not exist but you want to create a new file, then the permissions on the directory in which you want to write the file. If you use sudo, then you are temporarily becoming the root user, and the root user can read/write/execute any file at all without restriction.
If you run your script first using sudo and the script ends up creating a file, that file is probably going to be owned by the root user and will not be writable by your typical user. If you run your script without using sudo, then it's going to run under the username you used to connect to the machine and that user will need to have permission to write the log files.
You can change the ownership and permissions of directories and files by using the chown, chmod, chgrp commands. If you want to always run your script as sudo, then you don't have much to worry about. If you want to run these commands without sudo, that means you're running them as some other user and you will need to grant write permission to that user, whoever it is, in order to write the files/folders where the log files get written.
For instance, if I wanted to run the script as user sneakyimp and wanted the files written to /home/sneakyimp/logs/ then I'd need to make sure that directory was writable by sneakyimp:
sudo chown -R sneakyimp:sneakyimp /home/sneakyimp/logs
This command changes ownership of that directory and its contents to the user sneakyimp. You might also need to run some chmod commands to make sure they are writable by owner.
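The chmod side of that can be sketched like this (the directory name here is made up; the point is the 775/664 pattern, which gives the owner and group write access while leaving others read-only):

```shell
#!/bin/bash
# Sketch: make a log directory writable by its owner and group so the
# script can write without sudo. "logs-demo" is a stand-in path.
logdir=logs-demo
mkdir -p "$logdir"
chmod 775 "$logdir"            # rwx for owner and group, r-x for others
touch "$logdir/plug.log"
chmod 664 "$logdir/plug.log"   # rw for owner and group, read-only for others
ls -l "$logdir/plug.log"
rm -r "$logdir"
```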

Old version of script is run unless invoked with "sh scriptname"

I'm making a small edit to a shell script I use to mask password inputs like so:
#!/bin/bash
printf "Enter login and press [ENTER]\n"
read user
printf "Enter password and press [ENTER]\n"
read -s -p pass
With the read -s -p pass being the updated part. For some reason I'm not seeing the changes when I run it normally by entering script.sh into the command line but I do see the changes when I run sh script.sh. I've tried opening new terminal windows, and have run it in both ITerm and the default Mac terminal. I'm far from a scripting master, does anyone know why I'm not seeing the changes without the prefix?
Use a full or relative path to the script to make sure you're running what you think you're running.
If you are running it as simply script.sh, then the shell performs a PATH lookup to locate it. To see which script.sh bash would use in that case, run type script.sh.
Relative Path
./script.sh
Full Path
/path/to/my/script.sh
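To see every copy the shell can find, and to flush bash's cached lookup after moving a script, a quick sketch (script.sh is assumed to sit in a directory on PATH):

```shell
# List every script.sh visible on PATH, in lookup order:
type -a script.sh
# bash caches command locations; after moving or replacing a script,
# clear the cache so the next lookup searches PATH again:
hash -r
```

A stale hash-table entry is a classic cause of "old version keeps running" symptoms.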

Env variables not being picked up by script

I'm creating a script to pass to a few different people and ran into an environment problem. The script wouldn't run unless I set $PATH, $HOME, and $GOPATH at the beginning of the file, like so:
HOME=/home/Hustlin
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
This is awkward when passing the script around, because each person has to set these variables themselves. The file would rarely be run by the user directly and would most often be run via crontab.
I would love to hear a better way of coding this so I'm not asking everyone I send the script to update these variables.
Thank you all in advance!!!
EDIT
The script is being run via crontab with no special permissions.
1,16,31,46 * * * * /home/Hustlin/directory1/super_cool_script.sh
Here is the script I am running:
#!/bin/bash
# TODO Manually put your $PATH and $HOME here.
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
HOME=/home/Hustlin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# Field1
field1="foo"
# Welcome message.
echo Starting the update process...
# Deposit directory.
mkdir -p $HOME/directory1/sub1/data/body
mkdir -p $HOME/directory1/sub2/system
# Run command
program1 command1
# Run longer command.
program1 command2 field1
sleep 3
program1 command3 -o $HOME/directory1/sub1/data $field1
sleep 1
# Unzip and discard unnecessary files.
unzip $HOME/directory1/sub1/data/$field1 -d $HOME/directory1/sub1/data
rm $HOME/directory1/sub1/data/bar.yaml $HOME/directory1/sub1/data/char.txt
rm $HOME/directory1/sub1/data/$field1.zip
# Rename
mv $HOME/directory1/sub1/data/body.json $HOME/directory1/sub1/data/body/$(date -d '1 hour ago' +%d-%m-%Y_%H).json
echo Process complete.
I changed most of the program and command names for privacy. What I did post still represents what is being done and how the files are being moved.
The issue is crontab, not the script.
When you run the script from your terminal, you are logged in to a session with all environment variables set, so the script can use them.
But when you run it from crontab, it runs in an "empty" session with no environment variables set; it doesn't even know about your user.
Run the script from crontab like this:
su --login Hustlin /home/Hustlin/directory1/super_cool_script.sh
Check this documentation.
http://man7.org/linux/man-pages/man1/su.1.html
bash -l -c /path/to/script makes bash start as a login shell and read its login startup files (/etc/profile and ~/.profile or ~/.bash_profile) first, so HOME and PATH will be set.
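Applied to the crontab entry from the question, that approach looks like this (sketch only; the schedule and path are the asker's):

```shell
# Crontab entry routed through a login shell so profile files are read first:
#   1,16,31,46 * * * * bash -l -c /home/Hustlin/directory1/super_cool_script.sh

# The effect can be previewed from a terminal; a login shell should report
# a populated HOME and PATH:
bash -l -c 'echo "HOME=$HOME"; echo "PATH=$PATH"'
```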

Bash script failing when run by cron - mktemp outputting nothing

I have a shell script, that works when I run it manually, but silently fails when run via cron. I've trimmed it down to a very minimal example:
#!/usr/bin/env bash
echo "HERE:"
echo $(mktemp tmp.XXXXXXXXXX)
If I run that from the command line, it outputs HERE: and a new temporary filename.
But if I run it from a cron file like this, I only get HERE: followed by an empty line:
SHELL=/bin/bash
HOME=/
MAILTO="me#example.com"
0 5 * * * /home/phil/test.sh > /home/phil/cron.log
What's the difference? I've also tried using /bin/mktemp, but no change.
The problem is that, when started from cron, the script tries to create the temporary file in the root directory, where it has no permission to write.
The cron configuration file contains HOME=/, so the current directory is / when the script starts. The template passed to mktemp contains a file name only, so mktemp tries to create the temporary file in the current directory, which is /.
$ HOME=/
$ cd
$ mktemp tmp.XXXXXXXXXX
mktemp: failed to create file via template ‘tmp.XXXXXXXXXX’: Permission denied
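Giving mktemp an absolute template (or letting it pick the directory via -t and $TMPDIR) removes the dependence on cron's current directory entirely. A minimal sketch:

```shell
#!/usr/bin/env bash
# Fix: use an absolute template so mktemp never writes to the current directory.
tmp=$(mktemp /tmp/tmp.XXXXXXXXXX) || exit 1
echo "HERE: $tmp"
rm -f "$tmp"

# Equivalent: let mktemp choose the directory from $TMPDIR (default /tmp):
tmp2=$(mktemp -t tmp.XXXXXXXXXX) || exit 1
rm -f "$tmp2"
```

Either form works identically from a terminal and from cron, since neither depends on the working directory.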