I'm trying to trigger an Oozie job through a shell script, but on execution of the shell script I get a "command not found" error on the line:
ooziejob =$(oozie job -oozie http://oozieserver:port/oozie -config /root/SqoopWrapper1/sqoop_job.properties -run);
My shell script containing the oozie command is:
input=/root/SqoopWrapper1/InputFile.txt
echo "internal field sep"
IFS='|'
while read SourceDB db_name Mysql_table hdfsdir libpath
do
echo "do...while"
if [ SourceDB = Mysql ]
then
driver = com.mysql.jdbc.Driver
jdbcUri = jdbc:mysql://host:3306
Mysql_table = WrapperTbl
UserName = ****
Password = ****
fi
echo "Oozie command exe"
ooziejob =$(oozie job -oozie http://oozieserver:port/oozie -config /root/SqoopWrapper1/sqoop_job.properties -run);
echo $ooziejob;
done < $input
exit 0
You have a space before the equal-sign.
BTW, when you post this kind of question, you should always say what shell and what OS you are using.
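For reference, a minimal sketch of that line with the space removed (shell assignments must not have whitespace around the =; the same applies to the driver, jdbcUri, UserName and Password assignments in your script):
# no spaces around '=' in a shell assignment
ooziejob=$(oozie job -oozie http://oozieserver:port/oozie -config /root/SqoopWrapper1/sqoop_job.properties -run)
echo "$ooziejob"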
I am trying to log in to a server, 100.18.10.182, and trigger my spark-submit job on the server 100.18.10.36 from the .182 server in Apache Airflow. I have used a BashOperator (a shell script to ssh into the 100.18.10.182 server), and for the spark-submit job I have used a SparkSubmitOperator downstream of the BashOperator.
I am able to execute the BashOperator successfully, but the SparkSubmitOperator fails with:
Cannot execute: Spark submit
I think this is because I am unable to pass the SSH session (to the .182 server) into the next SparkSubmitOperator, or it may be due to some other issue related to --jars or --packages; I'm not sure.
I was thinking of using xcom_push to push some data from my BashOperator and xcom_pull to read it in the SparkSubmitOperator, but I'm not sure how to pass it in a way that my server is logged in and my SparkSubmitOperator then gets triggered from that box itself.
Airflow dag code:
t2 = BashOperator(
    task_id='test_bash_operator',
    bash_command="/Users/hardikgoel/Downloads/Work/airflow_dir/shell_files/airflow_prod_ssh_script.sh ",
    dag=dag)
t2

t3_config = {
    'conf': {
        "spark.yarn.maxAppAttempts": "1",
        "spark.yarn.executor.memoryOverhead": "8"
    },
    'conn_id': 'spark_default',
    'packages': 'com.sparkjobs.SparkJobsApplication',
    'jars': '/var/spark/spark-jobs-0.0.1-SNAPSHOT-1/spark-jobs-0.0.1-SNAPSHOT.jar firstJob',
    'driver_memory': '1g',
    'total_executor_cores': '21',
    'executor_cores': 7,
    'executor_memory': '48g'
}

t3 = SparkSubmitOperator(
    task_id='t3',
    **t3_config)

t2 >> t3
Shell Script code:
#!/bin/bash
USERNAME=hardikgoel
HOSTS="100.18.10.182"
SCRIPT="pwd; ls"
ssh -l ${USERNAME} ${HOSTS} "${SCRIPT}"
echo "SSHed successfully"
if [ ${PIPESTATUS[0]} -eq 0 ]; then
echo "successfull"
fi
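One possible workaround, as a rough sketch only: run spark-submit on the remote box inside the same SSH command from the BashOperator's shell script, instead of relying on the SSH session carrying over into a separate SparkSubmitOperator. The host, jar path, and resource settings below are taken from the question; spark-submit being available on the remote PATH is an assumption.
#!/bin/bash
# Sketch: submit the Spark job on the remote host within the same SSH session,
# since an SSH login made in one Airflow task does not persist into the next task.
USERNAME=hardikgoel
HOST="100.18.10.182"
REMOTE_CMD="spark-submit --conf spark.yarn.maxAppAttempts=1 --conf spark.yarn.executor.memoryOverhead=8 --driver-memory 1g --executor-cores 7 --executor-memory 48g /var/spark/spark-jobs-0.0.1-SNAPSHOT-1/spark-jobs-0.0.1-SNAPSHOT.jar firstJob"

if ssh -l "${USERNAME}" "${HOST}" "${REMOTE_CMD}"; then
    echo "remote spark-submit finished successfully"
else
    echo "remote spark-submit failed" >&2
    exit 1
fi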
I have a script which runs in an Autosys job, JOB_ABC_S1:
command : /ABC/script.sh
script.sh code:
grep -w "ABC" /d/file1.txt
status=$?
if [ $status -eq 0 ]
then
echo "Passed"
else
echo "Failed"
exit 1
fi
My issue is that whether the script fails or passes, the AutoSys job is marked as SU SUCCESS.
I don't want it to be marked as success when the script fails; it should mark the AutoSys job as FA if the script fails, and as SU SUCCESS if the script passes.
What should I change in the script to make this happen?
Job :
insert_job : JOB_ABC_S1
machine : XXXXXXXXXXX
owner : XXXXXXXX
box_name : BOX_ABC_S1
application : XXXX
permission : XXXXXXXXXXX
max_run_alarm : 60
alarm_if_fails : y
send_notification : n
std_out_file : XXXXX
std_err_file : XXXXX
command : sh /ABC/script.sh
At first look, all seems to be fine.
However, I would suggest a script modification which you can try out.
By default, Autosys fails the job if the exit code is non-zero, unless specified otherwise.
The job JIL seems to be fine.
Please update your script as below and check two things:
The executed job's EXIT-CODE: it should be either 1 or 2, since we are trying to fail the job in both cases.
The std_out / std_err log files.
Script:
#!/bin/sh
srch_count=$(grep -cw ABC /d/file1.txt)
if [ "$srch_count" -gt 0 ]; then
    echo "Passed"
    #exit 0
    exit 2
else
    echo "Failed"
    exit 1
fi
This way we can confirm if the exit code is correctly being captured by Autosys.
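Once you have confirmed that, a production version of the script (a sketch that keeps your original pass/fail logic and exits zero only on success) could look like this:
#!/bin/sh
# Exit 0 when ABC is found so Autosys marks the job SU SUCCESS,
# and exit non-zero otherwise so it marks the job FA.
if grep -qw "ABC" /d/file1.txt; then
    echo "Passed"
    exit 0
else
    echo "Failed"
    exit 1
fi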
I cannot run /etc/init.d/dbora.
When I run it through the terminal, it reports the following problem:
Shell
[root@localhost init.d]# ./dbora start
Starting...
Processing Database instance "ORA11G": log file /ora01/app/oracle/product/11.2.0/db_1/startup.log
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
My Linux user: oracle
Script
#!/bin/bash
# versao: 1.0
export TMP=/tmp
export ORACLE_HOSTNAME=centos7.dbaora.com
export ORACLE_UNQNAME=oracle
export ORACLE_BASE=/ora01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=ORA11G
export ORACLE_OWNER=oracle
PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
alias cdob='cd $ORACLE_BASE'
alias cdoh='cd $ORACLE_HOME'
alias tns='cd $ORACLE_HOME/network/admin'
alias envo='env | grep ORACLE'
umask 022
start(){
echo "Starting..."
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbstart $ORACLE_HOME"
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/emctl start dbconsole"
touch /var/lock/subsys/dbora
}
stop(){
echo "Stopping..."
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/emctl stop dbconsole"
su - $ORACLE_OWNER -c "$ORACLE_HOME/bin/dbshut $ORACLE_HOME"
rm -f /var/lock/subsys/dbora
}
restart(){
stop
start
}
usage(){
echo "usage: $0 {start|stop|restart}"
}
if [ `id -u` -ne 0 ]
then
echo "Este script deve ser executado como root"
exit
fi
case $1 in
'start') start;;
'stop') stop;;
'restart') restart;;
*) usage;;
esac
ORACLE_UNQNAME is an OS environmental variable used by Oracle Enterprise Manager; it supports managing multiple databases from one OEM instance.
It looks like you haven't set a value yourself, probably because you only have the one database, so it's already unique, right :) Nevertheless, you need to give it a value different from oracle: orcl is traditional and will do the trick. In Linux you can set it from the command line using export, like any other environment variable, or just change the value in your script.
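For example (ORA11G below is only a guess matching the ORACLE_SID in your script; orcl or any other unique name works just as well):
# in /etc/init.d/dbora, instead of export ORACLE_UNQNAME=oracle:
export ORACLE_UNQNAME=ORA11G

# or as a one-off from the shell, like any other environment variable:
export ORACLE_UNQNAME=orcl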
Hi, I'm running a cron job with Laravel.
Function declaration in Laravel
protected function schedule(Schedule $schedule)
{
    echo "test CRON JOB\n";
    $file1 = '1.log';
    $file2 = '2.log';
    $schedule->command('command1')->sendOutputTo($file1);
    $schedule->command('command2')->sendOutputTo($file2);
}
CRON JOB - Setting
pathToArtisan schedule:run 2>&1 >> /home/log/cron_output.log
Log file output (cron_output.log)
test CRON JOB
Running scheduled command: '/opt/alt/php55/usr/bin/php' 'artisan'command1 > '1.log' 2>&1 &
Running scheduled command: '/opt/alt/php55/usr/bin/php' 'artisan' command2 > '2.log' 2>&1 &
The echo in the schedule function is displayed, but the ones inside my command1 and command2 are not.
I tried
echo "test"; $this->info('test');
No 1.log or 2.log files were created, neither in /home/log/, nor where the Kernel.php file is, nor in the Commands folder.
Any ideas?
Thank you
You should use the built-in task output method in Laravel.
For example:
$file = 'command1_output.log';
$schedule->command('command1')
->sendOutputTo($file);
Everything is ok now.
$schedule->command('command1')
->sendOutputTo($file);
and inside your command
$this->info('test')
The files are created in the root folder of my application, so I didn't see them!
Thank you
$schedule->command('my:command')
    ->daily()
    ->onSuccess(function () {
        // The task succeeded...
        // set flag here and store to DB
    })
    ->onFailure(function () {
        // The task failed...
        // set flag here and store to DB
    });
Show the list in an admin dashboard so that you can check the logs using these flags (cron status).
I have a bash script that gets info from Heroku so that I can pull a copy of my database. The script works fine in Cygwin, but when run from cron it halts at Heroku's authentication prompt from the Heroku Toolbelt.
Here is my crontab:
SHELL=/usr/bin/bash
5 8-18 * * 1-5 /cygdrive/c/Users/sam/work/push_db.sh >>/cygdrive/c/Users/sam/work/output.txt
I have read the Googles and the man page within cygwin to come up with this addition:
#!/usr/bin/bash
. /home/sam.walton/.profile
echo $SHELL
curl -H "Accept: application/vnd.heroku+json; version=3" -n https://api.heroku.com/
#. $HOME/.bash_profile
echo `heroku.bat pgbackups:capture --expire`
#spawn heroku.bat pgbackups:capture --expire
expect {
"Email:" { send -- "$($HEROKU_LOGIN)\r"}
"Password (typing will be hidden):" { send -- "$HEROKU_PW\r" }
timeout { echo "timed out during login"; exit 1 }
}
sleep 2
echo "first"
curl -o latest.dump -L "$(heroku.bat pgbackups:url | dos2unix)"
Here's the output from output.txt:
/usr/bin/bash
{
"links":[
{
"rel":"schema",
"href":"https://api.heroku.com/schema"
}
]
}
Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed. Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed. Enter your Heroku credentials. Email: Password (typing will be hidden): Authentication failed.
As you can see, it appears that the output is not getting the result of the send command, as if it's still waiting. I've done many experiments with the credentials and the expect statements; all stop here. I've seen a few examples and attempted to try them out, but I'm getting fuzzy-eyed, which is why I'm posting here. What am I not understanding?
Thanks to comments, I'm reminded to explicitly place my env variables in .bashrc:
[[ -s $USERPROFILE/.pik/.pikrc ]] && source "$USERPROFILE/.pik/.pikrc"
export HEROKU_LOGIN=myEmailHere
export HEROKU_PW=myPWhere
My revised script, per @Dinesh's excellent example, is below:
. /home/sam.walton/.bashrc
echo $SHELL
echo $HEROKU_LOGIN
curl -H "Accept: application/vnd.heroku+json; version=3" -n https://api.heroku.com/
expect -d -c "
spawn heroku.bat pgbackups:capture --expire --app gw-inspector
expect {
"Email:" { send -- "myEmailHere\r"; exp_continue}
"Password (typing will be hidden):" { send -- "myPWhere\r" }
timeout { puts "timed out during login"; exit 1 }
}
"
sleep 2
echo "first"
This should work, but the echo of the variable yields nothing, giving me a clue that the variable is not being picked up, so I am testing with the values hardcoded directly to eliminate that as a factor. As you can see from my output, not only does the echo yield nothing, there is also no sign of any diagnostics being produced, which makes me wonder whether the script is even reaching expect at all, and what became of the spawn command. To restate, the heroku.bat command works outside the expect block, but the results are as above. The result of the command directly above is:
/usr/bin/bash
{
"links":[
{
"rel":"schema",
"href":"https://api.heroku.com/schema"
}
]
}
What am I doing wrong, and what will show me diagnostic notes?
If you are going to use the expect code inside your bash script, instead of calling it separately, then you should use the -c flag option.
From your code, I assume that you have the environment variables HEROKU_LOGIN and HEROKU_PW declared in the bashrc file.
#!/usr/bin/bash
# Your code here

# HEROKU_LOGIN & HEROKU_PW are replaced with the variable values by bash
# before expect runs, because the -c argument is in double quotes.
expect -c "
spawn <your-executable-process-here>
expect {
  \"Email:\" { send -- \"$HEROKU_LOGIN\r\"; exp_continue }
  \"Password (typing will be hidden):\" { send -- \"$HEROKU_PW\r\" }
  timeout { puts \"timed out during login\"; exit 1 }
}
"
# Your further bash code here
You should not use the echo command inside expect code; use puts instead. Spawning the process inside the expect code is more robust than spawning it outside.
Notice the use of double quotes around the argument to the expect -c flag. If you use single quotes, then bash won't do any form of substitution. So, if you need bash variable substitution, you should use double quotes for the expect -c argument (escaping any double quotes that belong to the expect code itself, as in the snippet above).
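For example, a minimal illustration of the quoting difference (the NAME variable here is purely hypothetical):
NAME=world
# Double quotes: bash substitutes $NAME before expect runs, so this prints: hello world
expect -c "puts \"hello $NAME\"; exit"
# Single quotes: bash passes $NAME through untouched, so Tcl looks for a Tcl variable
# named NAME and fails with: can't read "NAME": no such variable
expect -c 'puts "hello $NAME"; exit'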
To learn about the usage of the -c flag, have a look here.
If you still have any issues, you can debug by appending the -d flag in the following way:
expect -d -c "
your code here
"