CRON and SQLPLUS - oracle

I want to run a script containing some sqlplus commands from cron.
The problem is that the sqlplus command is not executed, for some reason, when the script runs under cron. If I run the script myself, it works fine.
I've checked some forums, including the topics here on stackoverflow.com, and found some tips regarding the correct setting of environment variables. But even after double-checking this, the script doesn't work.
Here is my script:
echo $ORACLE_HOME|grep "oracle" > /dev/null
if [ $? = 1 ] ; then
echo "Setting environment variable"
# Setting Oracle environment
. /usr/oracle/product/10.2.0/.profile
export NLS_LANG='AMERICAN_GERMANY.WE8ISO8859P1'
fi
/usr/oracle/product/10.2.0/bin/sqlplus username/password @basics.sql > export.file
basics.sql contains:
set pagesize 0
set feedback off
set heading off
set linesize 400
set NULL nll
SELECT SOME_FIELDS FROM TABLE ORDER BY FIELD;
EXIT;
Any ideas?

The shell environment is very important for Oracle, and it is almost empty when a job runs under cron. As always, there are several ways to solve this:
use fully qualified paths - a bit inflexible
make the script set up its own execution environment
set up the execution environment in cron, when calling the script.
A pretty much standard way of setting up your environment from within the script is to use the oraenv script, normally located in /usr/local/bin:
ORACLE_SID={your_sid}
ORAENV_ASK=NO
type oraenv >/dev/null 2>&1 || PATH=/usr/local/bin:$PATH
. oraenv
SQLPATH=$HOME/sql
export SQLPATH
# do your stuff
From the cron line:
10 10 * * * . $HOME/.profile; $HOME/bin/your_script >$HOME/log/your_script.log 2>&1
This assumes that .profile is non-interactive and exports the needed environment. Note the leading ". ", which sources the profile into the shell that runs the script; running it as a plain command would set the environment in a subshell and lose it again.
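Putting the pieces together, a self-contained version of the original script might look like this (just a sketch; the SID, output paths, and use of oraenv are assumptions to adapt):
#!/bin/sh
# Find oraenv even under cron's minimal PATH
type oraenv >/dev/null 2>&1 || PATH=/usr/local/bin:$PATH
ORACLE_SID=MYSID      # assumption: replace with your actual SID
ORAENV_ASK=NO
. oraenv
export NLS_LANG='AMERICAN_GERMANY.WE8ISO8859P1'
# Use full paths for both sqlplus and the sql script
$ORACLE_HOME/bin/sqlplus username/password @/path/to/basics.sql > /path/to/export.file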

Related

Sqlplus command is not working via plink

I have a shell script that uses the sqlplus command to fetch data from a database (installed on Linux). The same script works fine in the Linux environment, but not when I execute it from a Windows environment via the batch file below:
set ORACLE_TERM=xterm
set ORACLE_BASE=/home/pwcadm/app/pwcadm
set ORACLE_HOME=/home/pwcadm/app/pwcadm/product/11.2.0/client_1
set ORACLE_HOSTNAME=kyora02.kymab.local
set ORACLE_SID=orcl
set ORA_NLS11=$ORACLE_HOME/nls/data
set LANG=en_US.UTF-8
set PATH=/opt/CollabNet_Subversion/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/X11R6/bin:/root/bin:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/bin:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/bin:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/lib/site_perl/5.8.3/i686-linux-thread-multi:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/perl/lib/5.10.0/x86_64-linux-thread-multi:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/perl/bin:/home/pwcadm/Informatica/PowerCenter9.1.0.3/server/bin
set LD_LIBRARY_PATH=/home/pwcadm/app/pwcadm/product/11.2.0/client_1/lib:/home/pwcadm/app/pwcadm/product/11.2.0/client_1/lib32:/home/pwcadm/Informatica/PowerCenter9.1.0.3/server/bin
cd C:\Program Files\PuTTY
plink csaadm@172.16.122.11 -pw csaadm /app/csa/REG_AUTOMATION/scripts/CHECK_GAN_INSERT.sh
The batch file produces the error message below, even though I have set all the paths.
C:\Program Files\PuTTY>plink csaadm@172.16.122.11 -pw csaadm /app/csa/REG_AUTOMATION/scripts/CHECK_GAN_INSERT.sh
Error 6 initializing SQL*Plus
SP2-0667: Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
FAIL
/app/csa/REG_AUTOMATION/scripts/CHECK_GAN_INSERT.sh: line 27: [: -ne: unary operator expected
CHECK_GAN_INSERT.sh
ACCOUNT_GLOBAL_COUNT=$(echo "
set heading off
set feedback off
set verify off
set trimspool on
set trimout off
set pagesize 0
set space 0
whenever sqlerror exit 2;
SELECT COUNT(*) FROM ACCOUNT_GLOBAL WHERE NAME='test';
"|/home/pwcadm/app/pwcadm/product/11.2.0/client_1/bin/sqlplus -S ${DB_USERNAME}/${DB_PASSWORD}#${CONNECTING_STRING})
ERROR_MSG=$(echo ${ACCOUNT_GLOBAL_COUNT} | grep ORA | wc -l)
if [ ${ERROR_MSG} -ne 0 ]
then
    echo "${ACCOUNT_GLOBAL_COUNT}"
    exit
fi
if [ ${ACCOUNT_GLOBAL_COUNT} -ne 0 ]
then
    echo "PASS"
else
    echo "FAIL"
fi
Please help.
We don't have enough information about your problem, but here is my guess:
you have to set the ORACLE_HOME environment variable each time you log in (or add it to a startup script),
you have to extend PATH too (use export PATH=$PATH:$ORACLE_HOME/bin),
also, you may need to start the Oracle instance.
Check these three points and update your question if your problem is still not solved.
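For illustration, a sketch of what the top of the remote script could look like (the paths are copied from the batch file above; treat them as assumptions to verify):
# At the top of CHECK_GAN_INSERT.sh, so the script no longer depends
# on whatever environment plink's non-login shell provides
export ORACLE_HOME=/home/pwcadm/app/pwcadm/product/11.2.0/client_1
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib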

how to run a shell script with export command in crontab

I have a shell script that exports values of variables when executed. The same values will be used in another script.
How do I run this script (test.sh) in cron?
#!/bin/sh
export I="10"
echo $I
I will be using root access for cron.
I tried this command:
*/5 * * * * /home/ubuntu/backup/.test.sh
I checked the environment variables afterwards; nothing is updated.
Why .test.sh if the script is just test.sh?
Anyway... an exported variable's life ends when the process that set it exits.
In your case, the I variable disappears when the test.sh script exits.
If you want your scripts to access the I value, you have to source the test.sh file (e.g. . /home/ubuntu/backup/test.sh) rather than execute it.
Otherwise you can set it in the .bashrc file.
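As a sketch, that sourcing approach in the crontab itself could look like this (backup.sh is a hypothetical second script that reads $I):
*/5 * * * * . /home/ubuntu/backup/test.sh && /home/ubuntu/backup/backup.sh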

Export USER env variable for use in cron

I have a script that requires the env variable USER to be set. As the script is used by several users, I can't just do export USER=xxx at the beginning of the script. I could define it in the crontab, but I was wondering whether there is a good way of pulling it in.
I tried sourcing .bashrc and .profile, but neither define USER, plus on Ubuntu .bashrc simply returns on non-interactive shells.
You could work around it by writing this at the top of the script (Bashism):
USER=$(whoami)
or old-style:
USER=`whoami`
... assuming you have whoami in the PATH, which can also be set in the crontab, just like several (most?) other variables. That is, you can also set the variable in the crontab itself (at least in Vixie's cron) - see here for example.
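For example, a sketch of a crontab with variable assignments above the schedule lines (whether USER itself may be overridden depends on the cron implementation):
USER=foouser
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 2 * * * /path/to/script.sh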
Use the env command. Your crontab entry could look like:
* * * * * env USER=foouser /path/to/script.sh
You can specify an environment variable before the command. This way, it won't affect anything else in the crontab.
user.sh:
#!/bin/sh
echo $USER
cli:
USER=foo ./user.sh ## outputs "foo"

How to write a bash script to set global environment variable?

Recently I wrote a script which sets an environment variable; take a look:
#!/bin/bash
echo "Pass a path:"
read path
echo $path
defaultPath=/home/$(whoami)/Desktop
if [ -n "$path" ]; then
    export my_var=$path
else
    echo "Path is empty! Exporting default path ..."
    export my_var=$defaultPath
fi
echo "Exported path: $my_var"
It works just great, but the problem is that my_var is available only locally, I mean in the console window where I ran the script.
How to write a script which allow me to export global environment variable which can be seen everywhere?
Just run your shell script preceded by "." (dot space).
This causes the script to run its instructions in the original shell, so the variables still exist after the script finishes.
Ex:
$ cat setmyvar.sh
export myvar=exists
$ . ./setmyvar.sh
$ echo $myvar
exists
Each and every shell has its own environment. There's no Universal environment that will magically appear in all console windows. An environment variable created in one shell cannot be accessed in another shell.
It's even more restrictive. If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell.
If all of your shells need access to the same set of variables, you can create a startup file that will set them for you. This is done in BASH via the $HOME/.bash_profile file (or through $HOME/.profile if $HOME/.bash_profile doesn't exist) or through $HOME/.bashrc. Other shells have their own set of startup files. One is used for logins, and one is used for shells spawned without logins (and, as with bash, a third for non-interactive shells). See the manpage to learn exactly which startup scripts are used and in what order they're executed.
You can try using shared memory, but I believe that only works while processes are running, so even if you figured out a way to set a piece of shared memory, it would go away as soon as that command finished. (I've rarely used shared memory except for named pipes.) Otherwise, there's really no way to set an environment variable in one shell and have another shell automatically pick it up. You can try using named pipes or writing the environment variable to a file for other shells to pick it up.
Imagine the problems that could happen if someone could change the environment of one shell without my knowledge.
Actually I found a way to achieve this (which in my case was to use a bash script to set a number of security credentials).
I just call bash from inside the script, and the spawned shell then has the exported values:
export API_USERNAME=abc
export API_PASSWORD=bbbb
bash
Now, calling the file as ~/.app-x-setup.sh gives me an interactive shell with those environment variables set up.
The following is extracted from the 2nd paragraph of David W.'s answer: "If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell."
In case you need to let the parent shell access your new environment variables, just issue the following command in the parent shell:
source <your_subshell_script>
or using the shortcut:
. <your_subshell_script>
You have to add the variable to your .profile, located at /home/$USER/.profile.
You can do that with this command:
echo 'export TEST="hi"' >> $HOME/.profile
Or by editing the file with emacs, for example.
If you want to set this variable for all users, you have to edit /etc/profile (as root).
There is no global environment, really, in UNIX.
Each process has an environment, originally inherited from the parent, but it is local to the process after the initial creation.
You can only modify your own, unless you go digging around in the process using a debugger.
Write it to a temporary file, let's say ~/.myglobalvar, and read it from anywhere:
echo "$myglobal" > ~/.myglobalvar
Environment variables are always "local" to process execution; the export command allows setting environment variables for subprocesses. You can look at .bashrc to set environment variables at the start of a bash shell. What you are trying to do seems not possible, as a process cannot modify (or access?) the environment variables of another process.
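A quick sketch of that behavior at an interactive prompt:
$ export MY_VAR=hello
$ sh -c 'echo $MY_VAR'        # the child process inherits the exported variable
hello
$ sh -c 'export CHILD_VAR=x'  # the child's export dies with the child
$ echo $CHILD_VAR             # prints an empty line in the parent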
You can update the ~/.bashrc or ~/.bash_profile file which is used to initialize the environment.
Take a look at the loading behavior of your shell (explained in the manpage, usually referring to .XXXshrc or .profile). Some configuration files are loaded at login time of an interactive shell, some are loaded each time you run a shell. Placing your variable in the latter might result in the behavior you want, e.g. always having the variable set using that distinct shell (for example bash).
If you need to dynamically set and reference environment variables in shell scripts, there is a workaround. Judge for yourself whether it is worth doing, but here it is.
The strategy involves having a 'set' script which dynamically writes a 'load' script; the 'load' script contains code to set and export an environment variable, and is executed periodically by the other scripts which need to reference the variable. BTW, the same strategy could be implemented by writing and reading a file instead of a variable.
Here's a quick example...
Set_Load_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
echo "#!/bin/bash" > $PROCESSING_SIGNAL_SCRIPT
echo "export PROCESSING_SIGNAL=$1" >> $PROCESSING_SIGNAL_SCRIPT
chmod ug+rwx $PROCESSING_SIGNAL_SCRIPT
Load_PROCESSING_SIGNAL.sh (this gets dynamically created when the above is run)
#!/bin/bash
export PROCESSING_SIGNAL=1
You can test this with
Test_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
N=1
LIM=100
while [ $N -le $LIM ]
do
    # DO WHATEVER LOOP PROCESSING IS NEEDED
    echo "N = $N"
    sleep 5
    N=$(( $N + 1 ))
    # CHECK PROCESSING_SIGNAL
    source $PROCESSING_SIGNAL_SCRIPT
    if [[ $PROCESSING_SIGNAL -eq 0 ]]; then
        # Write log info indicating that the signal to stop processing was detected
        # Write out all relevant info
        # Send an alert email of this too
        # Then exit
        echo "Detected PROCESSING_SIGNAL for all stop. Exiting..."
        exit 1
    fi
done
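To flip the signal and stop the test loop, regenerate the load script from another terminal (per Set_Load_PROCESSING_SIGNAL.sh above):
./Set_Load_PROCESSING_SIGNAL.sh 0
On its next pass the loop sources the regenerated Load_PROCESSING_SIGNAL.sh, sees PROCESSING_SIGNAL=0, and exits.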
A lazy script kept in ~/.bin/SOURCED to save and load data as flat files for the system:
[ ! -d ~/.megadata ] && mkdir ~/.megadata
function save_data {
    [ -z "$1" -o -z "$2" ] && echo 'save_data [:id:] [:data:]' && return
    local overwrite=${3-false}
    [ "$overwrite" = 'true' ] && echo "$2" > ~/.megadata/$1 && return
    [ ! -f ~/.megadata/$1 ] && echo "$2" > ~/.megadata/$1 || echo ID TAKEN set third param to true to overwrite
}
save_data computer engine
cat ~/.megadata/computer
save_data computer engine
save_data computer megaengine true
function get_data {
    [ -z "$1" -o -f $1 ] && echo 'get_data [:id:]' && return
    [ -f ~/.megadata/$1 ] && cat ~/.megadata/$1 || echo ID NOT FOUND
    :
}
get_data computer
get_data computer
Maybe a little off topic, but this is for when you really need the variables set only temporarily, to execute some script, and you ended up here looking for answers:
If you need to run a script with certain environment variables that you don't need to keep after execution, you could do something like this:
#!/usr/bin/env sh
export XDEBUG_SESSION=$(hostname);echo "running with xdebug: $XDEBUG_SESSION";$@
In my example I just use XDEBUG_SESSION with a hostname, but you can use multiple variables; keep them separated with a semicolon. Execution is as follows (assuming you called the script debug.sh and placed it in the same directory as your PHP script):
$ debug.sh php yourscript.php

How to simulate the environment cron executes a script with?

I normally have several problems with how cron executes scripts, as they usually don't have my environment set up. Is there a way to invoke bash(?) in the same way cron does, so I could test scripts before installing them?
Add this to your crontab (temporarily):
* * * * * env > ~/cronenv
After it runs, do this:
env - `cat ~/cronenv` /bin/sh
This assumes that your cron runs /bin/sh, which is the default regardless of the user's default shell.
Footnote: if the env output contains more advanced config, e.g. PS1=$(__git_ps1 " (%s)")$, it will fail cryptically with env: ": No such file or directory".
Cron provides only this environment by default:
HOME user's home directory
LOGNAME user's login
PATH=/usr/bin:/usr/sbin
SHELL=/usr/bin/sh
If you need more, you can source a script where you define your environment before the scheduling table in the crontab.
Couple of approaches:
Export cron env and source it:
Add
* * * * * env > ~/cronenv
to your crontab, let it run once, turn it back off, then run
env - `cat ~/cronenv` /bin/sh
And you are now inside a sh session which has cron's environment
Bring your environment to cron
You could skip the above exercise and just put . ~/.profile in front of your cron job, e.g.
* * * * * . ~/.profile; your_command
Use screen
The above two solutions still fail in that they provide an environment connected to a running X session, with access to dbus etc. For example, on Ubuntu, nmcli (Network Manager) will work with the above two approaches but still fail in cron.
* * * * * /usr/bin/screen -dm
Add the above line to cron, let it run once, and turn it back off. Connect to your screen session (screen -r). If you are checking that the screen session has been created (with ps), be aware that it sometimes shows in capitals (e.g. ps | grep SCREEN).
Now even nmcli and similar commands will fail, just as they do under cron.
You can run:
env - your_command arguments
This will run your_command with an empty environment.
Depending on the shell of the account
sudo su
env -i /bin/sh
or
sudo su
env -i /bin/bash --noprofile --norc
From http://matthew.mceachen.us/blog/howto-simulate-the-cron-environment-1018.html
Answering six years later: the environment mismatch is one of the problems solved by systemd "timers" as a cron replacement. Whether you run the systemd "service" from the CLI or via cron, it receives exactly the same environment, avoiding the mismatch problem.
The most common cause of cron jobs failing when they pass manual testing is the restrictive default $PATH set by cron, which on Ubuntu 16.04 is:
"/usr/bin:/bin"
By contrast, the default $PATH set by systemd on Ubuntu 16.04 is:
"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
So there's already a better chance that a systemd timer is going to find a binary without further hassle.
The downside of systemd timers is that they take slightly more effort to set up. You first create a "service" file to define what you want to run and a "timer" file to define the schedule to run it on, and finally "enable" the timer to activate it. A minimal sketch follows.
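Here is such a service/timer pair (unit names, description, path, and schedule are all hypothetical):
# /etc/systemd/system/myjob.service
[Unit]
Description=Example job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob.sh

# /etc/systemd/system/myjob.timer
[Unit]
Description=Run myjob.service on a schedule

[Timer]
OnCalendar=*-*-* 10:10:00

[Install]
WantedBy=timers.target
Activate it with systemctl enable --now myjob.timer; to verify the job under the exact environment the timer will use, run systemctl start myjob.service by hand.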
Create a cron job that runs env and redirects stdout to a file.
Use the file together with "env -" to create the same environment as a cron job.
Don't forget that since cron's parent is init, it runs programs without a controlling terminal. You can simulate that with a tool like this:
http://libslack.org/daemon/
By default, cron executes its jobs using whatever your system's idea of sh is. This could be the actual Bourne shell or dash, ash, ksh or bash (or another one) symlinked to sh (and as a result running in POSIX mode).
The best thing to do is make sure your scripts have what they need and to assume nothing is provided for them. Therefore, you should use full directory specifications and set environment variables such as $PATH yourself.
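A minimal sketch of that defensive style at the top of a cron script (the program and file paths are placeholders):
#!/bin/sh
# Assume nothing is provided: set PATH (and anything else needed) explicitly
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
/usr/local/bin/myprogram /full/path/to/input > /full/path/to/output 2>&1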
The accepted answer does give a way to run a script with the environment cron would use. As others pointed out, this is not the only criterion needed for debugging cron jobs.
Indeed, cron also uses a non-interactive terminal, without an attached input, etc.
If that helps, I have written a script that painlessly runs a command/script as it would be run by cron. Invoke it with your command/script as the first argument and you're good.
This script is also hosted (and possibly updated) on Github.
#!/bin/bash
# Run as if it was called from cron, that is to say:
# * with a modified environment
# * with a specific shell, which may or may not be bash
# * without an attached input terminal
# * in a non-interactive shell

function usage(){
    echo "$0 - Run a script or a command as it would be in a cron job, then display its output"
    echo "Usage:"
    echo "   $0 [command | script]"
}

if [ "$1" == "-h" -o "$1" == "--help" ]; then
    usage
    exit 0
fi

if [ $(whoami) != "root" ]; then
    echo "Only root is supported at the moment"
    exit 1
fi

# This file should contain the cron environment.
cron_env="/root/cron-env"
if [ ! -f "$cron_env" ]; then
    echo "Unable to find $cron_env"
    echo "To generate it, run \"/usr/bin/env > /root/cron-env\" as a cron job"
    exit 0
fi

# It would be a nightmare to expand "$@" inside a shell -c argument.
# Let's rather generate a string where we manually expand-and-quote the arguments
env_string="/usr/bin/env -i "
for envi in $(cat "$cron_env"); do
    env_string="${env_string} $envi "
done
cmd_string=""
for arg in "$@"; do
    cmd_string="${cmd_string} \"${arg}\" "
done

# Which shell should we use?
the_shell=$(grep -E "^SHELL=" /root/cron-env | sed 's/SHELL=//')
echo "Running with $the_shell the following command: $cmd_string"

# Let's route the output to files
# and provide no input (so that the command is executed without an attached terminal)
so=$(mktemp "/tmp/fakecron.out.XXXX")
se=$(mktemp "/tmp/fakecron.err.XXXX")
"$the_shell" -c "$env_string $cmd_string" >"$so" 2>"$se" < /dev/null
echo -e "Done. Here is \033[1mstdout\033[0m:"
cat "$so"
echo -e "Done. Here is \033[1mstderr\033[0m:"
cat "$se"
rm "$so" "$se"
Another simple way I've found (but it may be error prone; I'm still testing) is to source your user's profile files before your command.
Editing a script in /etc/cron.d/:
* * * * * user1 command-that-needs-env-vars
would turn into:
* * * * * user1 source ~/.bash_profile; source ~/.bashrc; command-that-needs-env-vars
Dirty, but it got the job done for me. Is there a way to simulate a login? Just a command you could run? bash --login didn't work. It sounds like that would be the better way to go, though.
EDIT: This seems to be a solid solution: http://www.epicserve.com/blog/2012/feb/7/my-notes-cron-directory-etccrond-ubuntu-1110/
* * * * * root su --session-command="command-that-needs-env-vars" user1 -l
The answer at https://stackoverflow.com/a/2546509/5593430 shows how to obtain the cron environment and use it for your script. But be aware that the environment can differ depending on the crontab file you use. I created three different cron entries to save the environment via env > log. These are the results on an Amazon Linux 4.4.35-33.55.amzn1.x86_64.
1. Global /etc/crontab with root user
MAILTO=root
SHELL=/bin/bash
USER=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=1
HOME=/
LOGNAME=root
_=/bin/env
2. User crontab of root (crontab -e)
SHELL=/bin/sh
USER=root
PATH=/usr/bin:/bin
PWD=/root
LANG=en_US.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
_=/usr/bin/env
3. Script in /etc/cron.hourly/
MAILTO=root
SHELL=/bin/bash
USER=root
PATH=/sbin:/bin:/usr/sbin:/usr/bin
_=/bin/env
PWD=/
LANG=en_US.UTF-8
SHLVL=3
HOME=/
LOGNAME=root
Most importantly, PATH, PWD and HOME differ. Make sure to set these in your cron scripts so that they rely on a stable environment.
In my case, cron was executing my script using sh, which fails on some bash syntax.
In my script I added the SHELL environment variable:
#!/bin/bash
SHELL=/bin/bash
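Alternatively (a sketch), the crontab itself can select the shell for every job below the assignment, at least in Vixie-style crons:
SHELL=/bin/bash
* * * * * /path/to/script.sh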
I don't believe that there is; the only way I know to test a cron job is to set it up to run a minute or two in the future and then wait.
