I have a shell script which calls an Informatica workflow, and I want to add functionality to the script to catch data errors during workflow processing, if required, and show an on-screen message like "The error is due to wrong data; please refer to the logs". Currently the log is generated, but I am unable to show the screen message from the shell script.
Below is the command used to call the workflow:
pmcmd startworkflow -sv CSA_DEV_INT -d Domain_CSADevelopment -u Administrator -p Administrator -f Sumit -wait wf_ERROR_LOG_TESTING
pwc_status=$?
but the value of pwc_status comes back as 0 even though I processed wrong data, and the Informatica logs catch the error.
As long as the pmcmd call itself is successful (i.e. the server is found, the user can be authenticated, the workflow starts) it will return 0, even if there are errors while processing data. Use the getworkflowdetails or gettaskdetails commands of the pmcmd utility to obtain details related to the workflow execution.
For more information about these commands, see the Command Reference; you can find it in the Informatica installation directory on your server or download it from the Informatica My Support site (you need to be a registered user).
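A minimal sketch of that check, assuming the run-status line printed by getworkflowdetails looks like `Workflow run status: [Succeeded]` (the exact wording can vary by Informatica version, so verify against your own pmcmd output first):

```shell
#!/bin/sh
# Start the workflow and wait for it to finish (command from the question).
pmcmd startworkflow -sv CSA_DEV_INT -d Domain_CSADevelopment \
    -u Administrator -p Administrator -f Sumit -wait wf_ERROR_LOG_TESTING
pwc_status=$?

# Ask the Integration Service how the run actually ended; a data error can
# leave the startworkflow exit code at 0 while the run status is Failed.
details=$(pmcmd getworkflowdetails -sv CSA_DEV_INT -d Domain_CSADevelopment \
    -u Administrator -p Administrator -f Sumit wf_ERROR_LOG_TESTING)

if [ "$pwc_status" -ne 0 ] || \
   ! echo "$details" | grep -q 'Workflow run status: \[Succeeded\]'; then
    echo "Error is coming due to wrong data. Please refer to the logs." >&2
fi
```

The status-line text is an assumption here; if your version prints a different phrase, adjust the grep pattern accordingly.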
I have written a small shell script to automate Big SQL and Hive synchronization.
Code is as below
echo "Login to BigSql"
<path to>/jsqsh bigsql --user=abc --password=pwd
echo "login successful"
echo "Syncing hive table <tbl_name> to Big SQL"
call syshadoop.hcat_sync_objects('DB_name','tbl_name','a','REPLACE','CONTINUE');
echo "Syncing hive table TRAINING_TRACKER to Big SQL Successfully"
Unfortunately, I am getting the message:
Login to BigSql
Welcome to JSqsh 4.8
Type "\help" for help topics. Using JLine.
And then it enters the Big SQL command prompt. Now, when I type "quit" and hit Enter, it gives me the following messages:
login successful
Syncing hive table <tbl_name> to Big SQL
./script.sh: line 10: call syshadoop.hcat_sync_objects(DB_name,tbl_name,a,REPLACE,CONTINUE): command not found
What am I doing wrong?
You need to redirect your SQL statements into the jsqsh command's standard input; as written, the shell waits for the interactive jsqsh session to end and then tries to run the call statement as a shell command. For example, you can start JSqsh and run a script at the same time with this command:
/usr/ibmpacks/common-utils/current/jsqsh/bin/jsqsh bigsql < /home/bigsql/mySQL.sql
(from https://www.ibm.com/support/knowledgecenter/en/SSCRJT_5.0.2/com.ibm.swg.im.bigsql.doc/doc/bsql_jsqsh.html)
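Applied to the script in the question, the synchronization call can be fed to jsqsh on standard input with a here-document instead of a separate SQL file; the jsqsh path, connection name, credentials, and object names below are the placeholders from the question:

```shell
#!/bin/sh
# Sketch: feed the SQL to jsqsh on stdin instead of typing it at the
# interactive prompt. Connection name, user, password, and the database/table
# names are the question's placeholders -- substitute your own.
echo "Syncing hive table tbl_name to Big SQL"
/usr/ibmpacks/common-utils/current/jsqsh/bin/jsqsh bigsql --user=abc --password=pwd <<'EOF'
call syshadoop.hcat_sync_objects('DB_name','tbl_name','a','REPLACE','CONTINUE');
quit
EOF
echo "Sync finished"
```

The single-quoted `'EOF'` delimiter keeps the shell from expanding anything inside the SQL, so the statement reaches jsqsh verbatim.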
There is already an auto-hcat sync job in Big SQL that does exactly what you're trying to do.
Check whether the job is running:
su - bigsql (or whatever instance owner)
db2 connect to bigsql
db2 "select NAME, BEGIN_TIME, END_TIME, INVOCATION, STATUS
     from SYSTOOLS.ADMIN_TASK_STATUS
     where BEGIN_TIME > (CURRENT TIMESTAMP - 60 minutes)
       and NAME = 'Synchronise MetaData Changes from Hive'"
If you don't see any output, simply enable it through Ambari:
Enable Automatic Metadata Sync
I have a script on a Unix server which looks like this:
mainScript.sh
#some stuff here
emailScript.sh $PARAM_1 $PARAM_2
#some other stuff here
As you can see, mainScript.sh is calling another script called emailScript.sh.
The emailScript.sh is supposed to perform a query via sqlplus, then parse the results and return them via email if any.
The interesting part of the code in emailScript.sh is this:
DB_SERVER=$1
USERNAME=$2
PASSWORD=$3
EVENT_DATE=$4
LIST_AUTHORIZED_USERS=$5
ENVID=$6
INTERESTED_PARTY=$7
RAW_LIST=$(echo "select distinct M_OS_USER from MX_USER_CONNECTION_DBF where M_EVENT_DATE >= to_date('$EVENT_DATE','DD-MM-YYYY') and M_OS_USER is not null and M_OS_USER not in $LIST_AUTHORIZED_USERS;" | sqlplus -s $USERNAME/$PASSWORD@$DB_SERVER)
As you can see, all I do is create the variable RAW_LIST by executing a query with sqlplus.
The problem is the following:
If I call mainScript.sh from the command line (PuTTY / KiTTY), the sqlplus command works fine and returns something.
If I call mainScript.sh via an external job (an ssh connection opened on the server by a Jenkins job), sqlplus returns nothing and takes 0 seconds, meaning it doesn't even try to execute.
In order to debug, I've printed all the variables and the query itself to check whether something wasn't properly set: everything is correctly set.
It really seems that the sqlplus command is not recognized, or something similar.
Would you have any idea how I can debug this? Where should I look for the issue?
You need to consider a few things here. From which directory are you executing the script when you run it yourself, and from which directory does your external application execute it? It is better to use the full path to the script, like /path/to/the/script/script.sh, or to run cd /path/to/the/script/ first and then execute it. Also check execute permissions: you as a user might have permission to run the script or the sql command, but your application might not. Check the user ID your application runs as and add it to the proper group.
We are using CA Workload Control Center (Autosys) to run a Windows batch file which calls another batch file which calls SQLPlus with the following:
sqlplus username/password %1 %2 %3
Since this is all being done remotely I cannot see what is actually happening on the server.
I am wondering: if the SQL script file exists but has invalid syntax (like referring to a non-existent table), will the SQLPlus prompt exit and return to the batch file?
The reason I ask is because I am looking at the log file from Autosys and it shows that the SQLPlus call was made and it prints out "ERROR AT LINE 2: Invalid Table", but then it does not show any other activity from the batch script after that, even though there are multiple echoes, file copies, etc. It seems as though it is not exiting SQLPlus, perhaps?
Is there a parameter I need to pass to SQLPlus to tell it to exit and return to the calling script after running a SQL script if it fails?
Edit: We are using WHENEVER SQLERROR inside our SQL files as well, and the log file does show this:
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
But I am still expecting it to continue with the rest of the batch script, and it's not; the above is the last thing Autosys shows in the log.
See the SQLPlus command WHENEVER SQLERROR; it is described in the Oracle docs:
http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12052.htm
Found the solution in this SO post:
You can do sqlplus -l test/Test@mydatabase @script.sql; the -l flag means it will only try to connect once, and if it fails for any reason it will exit instead of prompting. Look at the output of sqlplus -?, or see the documentation.
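To make SQLPlus itself exit on a failing statement and hand a non-zero status back to the caller, WHENEVER SQLERROR EXIT can be combined with an exit-code check. A sketch (shown here as a shell script for brevity; from a Windows batch file you would check %ERRORLEVEL% instead, and the credentials and query are placeholders):

```shell
#!/bin/sh
# WHENEVER SQLERROR EXIT makes sqlplus quit with a non-zero status as soon as
# a statement fails, instead of staying at the SQL> prompt and hanging the job.
sqlplus -s -l username/password <<'EOF'
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
SELECT * FROM no_such_table;
EOF

# Control returns here either way; the exit code tells us what happened.
if [ $? -ne 0 ]; then
    echo "SQL script failed; continuing with the rest of the job" >&2
fi
```

Note that exit codes are truncated modulo 256 on most systems, so treat the code as "zero vs. non-zero" rather than as the exact SQLCODE.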
So, this is a bit of an odd request, but I'm hoping someone on here knows some command-line fu. Might have to post to Server Fault too, we'll see.
I'm trying to figure out how I can pass the results of a curl request to the mysql command-line application. So basically something kind of like this:
mysql --user=root --password=my_pass < (curl http://localhost:3000/application.sql)
where that URL returns basically a text response with sql statements.
Some context:
An application I am developing supports multiple installations, as part of the installation process for a new instance we spin up a copy of our "data" database for the new instance.
I'm trying to automate the deployment process as much as possible, so I built a small "dashboard" app in Rails that can generate the SQL statements, config files, etc. for each instance, and that also helps us see stats about the instances and other fun stuff. Now I'm writing Capistrano tasks to actually do a deployment based on the ID of the installation, which I pass in as a variable.
The initial deployment setup includes creating the applications database, which this sql request will do. I could in theory pull the file in a wget request, execute and delete it but I thought it would be cleaner to just tell the remote server to curl request it and execute it in one step.
So any ideas?
I'm fairly certain the syntax you have won't work, as '<' expects a file. Instead you want to pipe the output of curl, which by default prints to STDOUT, into mysql.
I believe the following will work for you.
curl http://localhost:3000/application.sql | mysql --user=root --password=my_pass
In Bash, you can do process substitution:
mysql ... < <(curl ...)
I'm trying to execute the command "file /directory/*" through the web, using Ajax that calls a Perl script.
When I run the script from the server I get the MIME type correctly, but when I use the web page that triggers the Ajax, I get "application/x-empty".
If I run the command from the server using "sudo -u apache perl_script.pl", the result is correct.
Why do I get a different response from the Ajax call?
Try without the asterisk but instead with a complete filename; a glob like /directory/* is only expanded when the command is run through a shell.
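To narrow it down, it may help to reproduce the call exactly as the web server would run it. A sketch (the directory is the question's placeholder, and running as the apache user is an assumption about your setup):

```shell
#!/bin/sh
# Test one concrete file and the glob separately, as the web server's user,
# to distinguish permission problems from glob-expansion problems.
sudo -u apache file /directory/somefile          # one concrete file
sudo -u apache sh -c 'file /directory/*'         # glob expanded by a shell
```

If the concrete file works but the glob does not, the Perl script is passing the literal string `/directory/*` to `file` without a shell to expand it; if neither works, it is a permission issue for the apache user.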