Using an Ansible playbook to install Oracle 12c

I developed an Ansible playbook to automate the installation of Oracle 12c Release 2 on CentOS 7, and the install completes. However, I cannot start the database instance with STARTUP. I generate dbca.rsp from a template, using Ansible vars in the template, and then create the database with this task:
- name: create database
  command: '{{ oracle_home }}/bin/dbca -silent -createDatabase -sid {{ oracle_sid }} -templateName General_Purpose.dbc -responseFile {{ installation_folder }}/dbca.rsp'
The expected result would be a normal installation that creates the initSID.ora file, but I'm only getting an init.ora file that has no SID attached to it (or referenced inside it), so I'm not able to start an instance of the database.
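One thing worth double-checking in the task itself: in 12c, `dbca -silent -createDatabase` normally expects a `-gdbName` argument alongside `-sid`, and an incomplete invocation can leave you with only the generic init.ora. A sketch of the task with both flags, assuming the same variables as above:

```yaml
- name: create database
  command: >
    {{ oracle_home }}/bin/dbca -silent -createDatabase
    -gdbName {{ oracle_sid }}
    -sid {{ oracle_sid }}
    -templateName General_Purpose.dbc
    -responseFile {{ installation_folder }}/dbca.rsp
```

If dbca still fails, its logs under $ORACLE_BASE/cfgtoollogs/dbca usually point at the offending response-file parameter.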
When I use sqlplus / as sysdba to connect with SYSDBA privileges and run STARTUP or STARTUP PFILE='/oracle/app/oracle/product/12201/dbhome_1/dbs/init.ora';
I get the errors:
ORA-01261: Parameter db_recovery_file_dest destination string cannot be translated
ORA-01262: Stat failed on a file destination directory
Linux-x86_64 Error: 2: No such file or directory
Creating a new PFILE or SPFILE will return the same errors
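ORA-01261/ORA-01262 together with "No such file or directory" usually just mean that the path set for db_recovery_file_dest in the pfile does not exist on disk. A minimal sketch of the check and fix, with example paths that must be adapted to your environment:

```shell
# Find the destination the pfile actually points at (pfile path is an example).
grep -i db_recovery_file_dest /oracle/app/oracle/product/12201/dbhome_1/dbs/init.ora

# Create that directory and hand it to the oracle user before retrying STARTUP.
mkdir -p /oracle/app/oracle/fast_recovery_area
chown -R oracle:oinstall /oracle/app/oracle/fast_recovery_area
```

The same applies to any other *_dest parameter in the pfile: every directory it names must exist before the instance can mount.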
I suspect it is the dbca.rsp that is not generating a good startup file, since the other response files work: db_install.rsp drives the install setup and netca.rsp the listener, and the data in those files is filled in correctly.
Note that the templates I'm using are extracted from a clean, fresh install of Oracle 12c Release 2; only the variables take their values from the playbook itself, such as the SID, install location, inventory location, ...

Related

Oracle Database XE - setting SID during installation

Is there any way to change the default SID of Oracle Database during installation from the RPM? I want to do this without the X Window System, during configuration, when I invoke this command:
/etc/init.d/oracle-xe-18c configure SidParameter
I want to give the SID as a parameter.

upgrading oracle 11 to oracle 18

Is there a way to upgrade an Oracle 11 database to Oracle 18 XE without uninstalling Oracle 11? I searched the Oracle forums and website but could not find any README that explains how to upgrade.
I would be grateful if you can help me.
Cheers
Upgrading an Oracle database never requires you to uninstall the source binaries. You can simply install the target binaries (18XE) in any location and upgrade. Make sure you follow the proper steps and take a complete DB backup in case something goes wrong. Run the following script to check the status and readiness of the database:
cd $ORACLE_HOME/rdbms/admin/
sqlplus '/ as sysdba'
spool dbupgrade_info.log
@dbupgdiag.sql
spool off
This gives the current status of the database, such as the components and invalid objects. Make sure you do not have any invalid components or invalid objects in the SYS/SYSTEM schemas.
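For the invalid-object check specifically, a query along these lines (standard dictionary views) shows what would hold the upgrade back:

```sql
-- List invalid objects owned by SYS/SYSTEM; this should come back empty.
SELECT owner, object_name, object_type
FROM   dba_objects
WHERE  status = 'INVALID'
AND    owner IN ('SYS', 'SYSTEM');

-- And the state of the registry components.
SELECT comp_id, version, status
FROM   dba_registry;
```
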
Install the target binaries, then execute the pre-upgrade script from the source home:
$SOURCE_HOME/jdk/bin/java -jar $TARGET_HOME/rdbms/admin/preupgrade.jar FILE TEXT DIR <output_dir>
The required scripts will be generated in the <output_dir> you specified.
Now you can shut down the DB and listener, change the environment variables to point to the target home, and copy the pfile to the target location. Then:
sqlplus "/ as sysdba"
startup nomount
shutdown immediate;
That was just to make sure that the pfile is working fine.
Now you can start the actual DB upgrade (make sure this is done in the target (18XE) environment):
cd $ORACLE_HOME/rdbms/admin
sqlplus '/ as sysdba'
startup upgrade;
exit
cd $ORACLE_HOME/bin
./dbupgrade
Now run the postupgrade_fixups.sql script, which will also be in the <output_dir> from the pre-upgrade step.
As simple as that. Remember to configure tnsnames.ora and listener.ora in the $TNS_ADMIN location, and then start the listener.
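For the $TNS_ADMIN files, a minimal pair of entries looks like the sketch below; host, port, and service name are placeholders to be replaced with your own:

```
# listener.ora
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    )
  )

# tnsnames.ora
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = XE))
  )
```

Start the listener afterwards with lsnrctl start.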

Has anyone tried Oracle export (expdp) on a remote machine?

I am trying to export a dump file and log file on a remote machine using oracle expdp.
However, I am getting the following error:
Connected to: Oracle Database 11g
Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing
options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
The commands run on the remote machine with hostname 'Local', using the Oracle client, are:
SQL> create directory expdp_dir as '/vault2/expdp_dir';
SQL> grant read,write on directory expdp_dir to dbuser;
expdp dbuser/dbpwd#SID SCHEMAS=dbuser DIRECTORY=expdp_dir DUMPFILE=testDB24NOV17.dmp logfile=testDB24NOV17.log EXCLUDE=STATISTICS
Note that /vault2 is mounted on the remote machine with hostname 'Local'. The database is on a machine with hostname TestDB.
The OS is RHEL6.
Any thoughts /ideas on making this operation successful would be appreciated.
Please check this, as per Oracle Doc ID 1305166.1:
The errors can have multiple causes. Known causes are listed below.
One of the usual reasons for this problem is that the listener process was not started under the same account as the database instance service. The listener forks the new server process, and when this runs under a different security context than the database, access to directories and files is likely impacted.
Please verify the following information:
1) the output of:
ps -ef | grep SMON
2) the output of:
ps -ef | grep tnslsnr
3) the output of:
ps -ef | grep LIST
4) the output of:
ls -ld <directory_path>
Note:
When using ASM, the listener may have been started from the ASM home instead of the RDBMS home. Depending on your security settings, this may give rise to this issue.
One more known cause:
The directory path/folder exists, but CREATE DIRECTORY was executed by one database user while the import is run by a different user.
Solutions:
1. Make sure the listener and instance services are started from the same account.
2. Make sure the directory is shared between nodes so that it can be accessed from any instance, or create a similar folder locally on each node with the same directory path structure, and check that the permissions are correct.
3. Make sure the folder exists as specified in the "CREATE DIRECTORY" command.
4. Grant the required permission to the importing user to use the directory:
grant read, write on directory <directory_name> to <importing_user>;
If the above possible causes and solutions do not apply at your end, check whether the user has the proper permissions to run the UTL_FILE package.
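Putting causes 3 and 4 together, a quick sanity sequence as a privileged user (the names and path here are the ones from the question):

```sql
-- Recreate the directory object pointing at a path that exists on the DB server.
CREATE OR REPLACE DIRECTORY expdp_dir AS '/vault2/expdp_dir';

-- Grant it to the exporting user.
GRANT READ, WRITE ON DIRECTORY expdp_dir TO dbuser;

-- Verify what the dictionary actually recorded.
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'EXPDP_DIR';
```

Keep in mind that the directory path must exist on the database server itself, not on the client machine that runs expdp; Data Pump writes the dump and log files server-side.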
Hope it helps.

How to repair these errors related to environment variables?

I'm trying to migrate an Oracle database to PostgreSQL. I run the ora2pg command and get these errors:
root@ubuntu:~# ora2pg
DBI connect('host=localhost;sid=XE;port=1521','HR',...)
failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows)
and or NLS settings, permissions, etc. at /usr/local/share/perl/5.24.1/Ora2Pg.pm
line 1491.
FATAL: -1 ... ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH
(Windows) and or NLS settings, permissions, etc.
Aborting export...
This step is important to start the migration of the database. Note that I configured the ORACLE_HOME variable as well as LD_LIBRARY_PATH.
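For reference, a typical environment for ora2pg with an Oracle client looks like the sketch below; the exact paths are assumptions and must match where your client libraries actually live:

```shell
# Adjust these to your Instant Client / full client install location.
export ORACLE_HOME=/usr/lib/oracle/12.2/client64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

# OCIEnvNlsCreate failures often come from running as a user (e.g. root via
# sudo) whose shell does not carry these variables, so verify them in the
# same shell that runs ora2pg:
echo "$ORACLE_HOME"
echo "$LD_LIBRARY_PATH"
```

If the variables print correctly and the error persists, check that the paths really contain the Oracle client libraries (libclntsh.so) and that the user running ora2pg can read them.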

unixODBC Data source name not found, and no default driver specified

I'm trying to connect to a DB2 server from my Laravel application. Since Laravel doesn't support DB2 out of the box, I tried using this package https://github.com/cooperl22/laravel-db2, which requires me to install an ODBC driver.
So far, I've been able to install odbc using the following command:
apt-get install php-odbc
However, it seems my /etc/odbc.ini and /etc/odbcinst.ini configuration is still wrong. Here's the full error message when I try to run php artisan migrate:
[PDOException]
SQLSTATE[IM002] SQLDriverConnect: 0 [unixODBC][Driver Manager]Data source name not found, and no default
driver specified
Here's my /etc/odbc.ini:
[db2]
Description=DB2 Server
Driver=db2
Database=mydb
and here's my /etc/odbcinst.ini:
[db2]
Description = DB2 database access
Driver = /opt/ibm/db2/V10.5/lib64/libdb2.so
FileUsage = 1
DontDLClose = 1
Ensure that your environment variables are set correctly. As shown at this link, make sure the following is set:
export DB2INSTANCE=db2inst1
isql -v sample db2inst1 ibmdb2
Excerpt from the unixodbc.org page I linked to above:
Then when it comes to connecting, you MUST have the environment
variable DB2INSTANCE set to a valid db2 instance, so for instance to
connect with isql
Both of your config files look correct to me. (The isql part was just an example to test connectivity.)
