Unable to generate FMX in oracle apps r12.1.3 - oracle

I am new to Oracle Apps form development.
I am unable to generate a .FMX file using the command below in PuTTY:
frmcmp_batch.sh module=/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US/EMP.fmb
userid=apps/apps
output_file=/u01/install/APPS/apps/apps_st/appl/po/12.0.0/forms/US/EMP.fmx module_type=form
Please help me with this.
Thanks & Regards,
Vivek

You can call a script from the command line:
$ appCompile.sh EMP.fmb
where appCompile.sh might look like the one below:
ORACLE_HOME=/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US; export ORACLE_HOME
export NLS_LANG=american_america.we8iso8859p9 #for Turkish
NLS_DATE_FORMAT=DD/MM/YYYY; export NLS_DATE_FORMAT
FORMS_PATH=/data/aski_kodlar/standard; export FORMS_PATH
alias oh='cd $ORACLE_HOME'
LD_LIBRARY_PATH=/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US/lib:/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US/jdk/jre/lib/sparcv9:/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US/jdk/jre/lib/sparcv9/server:/u01/install/APPS/apps/apps_st/appl/au/12.0.0/forms/US/jdk/jre/lib/sparcv9/native_threads
export LD_LIBRARY_PATH
export ORACLE_TERM=vt220
export TERM=xterm
type=$2
if test "$type" = ""
then
type=form
fi
echo Compiling Form $1 ....
filename=`echo $1|cut -f1 -d.`
/u01/install/APPS/apps/apps_st/appl/scripts/frmcmp_batch.sh userid=apps/apps#db_name Module_Type=$type compile_all=yes window_state=minimize batch=yes Module=$1

Before compiling you must set the environment variables in Linux; the details depend on which kind of environment you are logged into, Oracle On-Demand or custom.
For custom:
Search for a file with the .env extension, usually located in /u01/oracle/EBS/app, and source that file to set the environment variables.
For Oracle On-Demand:
In a Linux SSH session, run the command below, where XXXX is the database:
pbrun impdba -u apXXXX
After that you must run your compilation script.
Move your prompt to the forms directory and extend PATH:
cd $AU_TOP/forms/US
export PATH=$PATH:$AU_TOP/resource:$AU_TOP/forms/US
Run the compilation script, replacing APPS_PASSWORD, XXHMS_TOP, XXCUST_TOP, and XX_FORM_FILE:
frmcmp_batch module=$XXHMS_TOP/forms/US/XX_FORM_FILE.fmb userid=apps/APPS_PASSWORD output_file=$XXCUST_TOP/forms/US/XX_FORM_FILE.fmx compile_all=special batch=yes
It will create a log file with a .err extension.
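The .err log can also be checked from a script. A minimal sketch, with two assumptions: XX_FORM_FILE is the placeholder name from the command above, and the grep pattern for the word "error" is a guess at the log's wording, not a documented format.

```shell
# After frmcmp_batch finishes, scan its .err log for failures.
# "XX_FORM_FILE" is the placeholder from the steps above (assumption).
errlog="XX_FORM_FILE.err"
if [ -f "$errlog" ] && grep -qi 'error' "$errlog"; then
    echo "compilation failed; see $errlog"
else
    echo "no errors reported in $errlog"
fi
```

This makes the batch job fail loudly instead of silently producing no .fmx.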

This should help:
frmcmp_batch module=/disk5/PROD/apps/apps_st/appl/au/12.0.0/forms/US/EMP.fmb userid=apps/apps output_file=/disk5/PROD/apps/apps_st/appl/ont/12.0.0/forms/US/EMP.fmx module_type=form batch=yes

Related

Where can I store variables and values for current Unix user so that I can use them in SSH and scripts?

I have some variables I use quite frequently to configure and tweak my Ubuntu LAMP server stack, but I'm getting tired of having to copy and paste the export command into my SSH window to register each variable and its value.
Essentially, I would like to keep my variables and their values in a file inside the user profile's home directory, so that when I type a command into an SSH window or execute a bash script the variables can be easily used. I don't want to set any system-wide variables, as some of these variables are for setting passwords etc.
What's the easiest way of doing this?
UPDATE 1
So essentially I could store the variables and values in a file and then, each time I log into an SSH session, call this file up once to set up the variables?
cat <<"EOF" >> ~/my_variables
export foo='bar'
export hello="world"
EOF
ssh root@example.com
$ source ~/my_variables
$ echo "$foo"
bar
and then to call the variable from within a script I place source ~/my_variables at the top of the script?
#!/bin/bash
source ~/my_variables
echo "$hello"
Just add your export commands to a file and then run source <the-file> (or . <the-file> for non-bash shells) in your SSH session.
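To avoid typing the source command on every login, a guarded line in ~/.bashrc loads the file automatically. A minimal sketch, reusing the ~/my_variables file name from the question:

```shell
# In ~/.bashrc: load the variables file on every login, if it exists.
# The [ -f ... ] guard keeps a missing file from producing an error.
if [ -f "$HOME/my_variables" ]; then
    . "$HOME/my_variables"
fi
```

Since ~/.bashrc is read by interactive shells, the variables will be set in every new SSH session without any manual step.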

Run bash alias from Apache Hive

I am trying to create an alias on a Hadoop machine and run it from the Hive JVM.
When I explicitly run the command from Hive with the ! prefix it works; however, when I add the alias, source the .bashrc file, and call the alias from Hive, I get an error. Example:
.bashrc content:
# Environment variables required by hadoop
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export HADOOP_HOME_WARN_SUPPRESS=true
export HADOOP_HOME=/home/hadoop
export PATH=$PATH:/home/hadoop/bin
alias load-table='java -cp /home/hadoop/userlib/MyJar.jar com.MyClass.TableLoader';
Call on Hive:
!load-table;
Output:
Exception raised from Shell command Cannot run program "load-table": error=2, No such file or directory
Aliases have several limitations compared to shell functions (e.g. by default you cannot call an alias from a non-interactive shell).
Define in your ~/.bashrc:
function load-table() {
# Make sure the java executable is accessible
if which java > /dev/null 2>&1; then
java -cp /home/hadoop/userlib/MyJar.jar com.MyClass.TableLoader
else
echo "java not found! Check your PATH!"
fi
}
export -f load-table # to export the function (BASH specific)
Source your .bashrc to apply the changes. Then, call load-table.
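A quick way to confirm that an exported function is visible to non-interactive child shells (a generic sketch, independent of Hive and of the MyJar.jar specifics):

```shell
# Define a function, export it (bash-specific), and call it from a
# non-interactive child bash process.
greet() { echo "hello from exported function"; }
export -f greet
bash -c 'greet'   # the child shell can run it; an alias could not
```

An alias defined the same way would fail in the child shell, because aliases are not expanded in non-interactive shells by default.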

Oracle setup in Solaris

I am running a batch job which schedules a shell script in Solaris.
Each of the scripts has Oracle environment variables, such as ORACLE_HOME, PATH, and the library path, set within the first few lines to run the queries in the script.
Is there any way to have the Oracle paths picked up automatically when the scripts run?
If I understand your question correctly... You can use oraenv to set the oracle environment.
Here is a basic example:
#!/bin/bash
ORACLE_SID=orcl
. oraenv <<EOF > /dev/null
$ORACLE_SID
EOF
echo $ORACLE_SID
echo $ORACLE_HOME
echo $ORACLE_BASE
echo $PATH
This script gets the Oracle-related paths and environment automatically from the oratab file.
Please note that oraenv is normally located in /usr/local/bin or /usr/bin.
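Under the hood, oraenv resolves ORACLE_HOME by looking up the SID in oratab (commonly /var/opt/oracle/oratab on Solaris, /etc/oratab on Linux). A minimal sketch of that lookup, assuming the standard SID:ORACLE_HOME:Y|N line format:

```shell
# Look up ORACLE_HOME for a SID the way oraenv does, assuming the
# standard "SID:ORACLE_HOME:Y|N" oratab format.
oratab=/etc/oratab            # /var/opt/oracle/oratab on Solaris
sid=orcl
home=$(awk -F: -v sid="$sid" '$1 == sid {print $2}' "$oratab" 2>/dev/null)
echo "ORACLE_HOME for $sid: $home"
```

If the lookup prints nothing, the SID is missing from oratab, which is also why oraenv would fail for it.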

sh: ...: is not an identifier when trying to invoke shell scripts using plink

Below is the shell script that I am trying to execute on MachineB from MachineA (a Windows machine) using plink.
#!/bin/bash
export HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
hive -S -e 'SELECT count(*) from testingtable1' > attachment22.txt
I am using plink to execute the shell script like below:
C:\PLINK>plink uname@MachineB -m test.sh
Using keyboard-interactive authentication.
Password:
Using keyboard-interactive authentication.
Your Kerberos password will expire in 73 days.
And this is the error I always get whenever I try to run it as above:
sh: HIVE_OPTS= -hiveconf mapred.job.queue.name=hdmi-technology: is not
an identifier
Is something wrong with my shell script? Or some trailing spaces? I am not able to figure it out. I am running plink from a Windows machine.
The sh: prefix on the error message indicates that the script is being executed by sh, not bash.
bash lets you combine setting a variable and exporting it into a single command:
export foo=bar
sh, or at least some older versions of it, require these two actions to be separated:
foo=bar ; export foo
A version of sh that doesn't recognize the export foo=bar syntax will interpret the string foo=bar as a variable name (and an illegal one, since it isn't an identifier).
Either arrange for the script to be executed by bash, or change this:
export HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
to this:
HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
export HIVE_OPTS
For that matter, since you're referring to $HIVE_OPTS at the very beginning of your script, it's almost certainly already exported, so you could just drop the export.
(You'll also need to avoid any other bash-specific features.)
So why is the system invoking the shell with sh? The #!/bin/bash syntax is specific to Unix-like systems. Windows generally decides how to execute a script based on the file extension; apparently your system is configured to invoke *.sh files using sh. (You could configure your system, using Folder Options, to invoke *.sh files using bash, but that might introduce other problems.)
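The portable two-step form described above can be verified in any POSIX shell. A minimal sketch, reusing the HIVE_OPTS variable from the question:

```shell
# Portable Bourne-shell form: assign first, export in a separate step.
HIVE_OPTS="$HIVE_OPTS -hiveconf mapred.job.queue.name=hdmi-technology"
export HIVE_OPTS
# The exported value is visible in a child sh:
sh -c 'echo "$HIVE_OPTS"'
```

Because every step here is plain POSIX syntax, the script behaves the same whether /bin/sh is bash, dash, or an older Bourne shell.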
I think the -m option to plink is for reading commands to execute on the remote machine from a local file. If my comment about line endings doesn't work, try
plink uname@MachineB test.sh
Make sure test.sh is executable by running
chmod +x test.sh
on MachineB.

verbose declare -x from .bashrc

After using Migration Assistant (on OS X) to copy my files from a case-sensitive file partition to a case-insensitive one, my .bashrc has become verbose each time it is run.
#!/bin/bash
#.bashrc file
alias ls='ls -G'
alias sbrc='source ~/.bashrc'
export GNUTERM=x11
export NWCHEM_TOP=~/install/nwchem-6.0-binary
export
PATH=/opt/local/bin:$PATH
...
The output is now
Last login: Mon Apr 30 11:33:33 on ttys005
declare -x Apple_PubSub_Socket_Render="/tmp/launch-oblOxq/Render"
declare -x COMMAND_MODE="unix2003"
declare -x DISPLAY="/tmp/launch-VdU1C8/org.x:0"
declare -x GNUTERM="x11"
...
vencen@dirac:~$
How can I silence bash?
Somehow my .bashrc file received an extra newline character, leaving an isolated export:
#!/bin/bash
export
PATH=/opt/local/bin:$PATH
#...
The correct file
#!/bin/bash
export PATH=/opt/local/bin:$PATH
#...
does not generate the unwanted output; typing export by itself on the command line does, since export with no arguments prints every exported variable as a declare -x line.
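Such a stray line can be located quickly with grep. A minimal sketch; the pattern matches any line consisting solely of the word export, with optional surrounding whitespace:

```shell
# Locate lines that consist solely of "export"; sourcing such a line
# prints every exported variable as "declare -x ...".
rcfile=~/.bashrc
grep -n '^[[:space:]]*export[[:space:]]*$' "$rcfile" || echo "no stray export lines"
```

The -n flag prints the offending line number, so the broken assignment can be rejoined in place.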
Not sure if this is the problem, but I have seen situations where Migration Assistant leaves your home directory not owned by your user account. Instead, your user account is granted all of the usual access via ACLs. You might check that and try fixing it to see if that makes the problem go away.
To check: ls -lde ~
To fix:
sudo chown -R `id -u`:`id -g` ~
