I'm unable to successfully run dbca in silent mode in a Docker container.
First, I installed the Oracle software using runInstaller, then root.sh, and netca. When I run dbca, I always get the following error:
DBCA_PROGRESS : 50%
[ 2017-12-21 21:49:18.914 UTC ] ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_QOPATCH", line 1547
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_QOPATCH", line 1532
ORA-06512: at "SYS.DBMS_QOPATCH", line 1417
ORA-06512: at line 1
The alert log says
QPI : Found directory objects and ORACLE_HOME out of sync
QPI : Trying to patch with the current ORACLE_HOME
QPI: ------QPI Old Directories -------
QPI: OPATCH_SCRIPT_DIR:/ade/b/2717506464/oracle/QOpatch
QPI: OPATCH_LOG_DIR:/ade/b/2717506464/oracle/QOpatch
QPI: OPATCH_INST_DIR:/ade/b/2717506464/oracle/OPatch
QPI: op_scpt_path /u01/app/oracle/product/12.2.0/dbhome_1/QOpatch
QPI: Unable to find proper QPI install
QPI: [1] Please check the QPI directory objects and set them manually
QPI: OPATCH_INST_DIR not present:/ade/b/2717506464/oracle/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 777
ORA-06512: at "SYS.DBMS_QOPATCH", line 532
ORA-06512: at "SYS.DBMS_QOPATCH", line 2247
and the trace log
[Thread-66] [ 2017-12-22 17:21:42.931 UTC ] [ClonePostCreateScripts.executeImpl:508] calling dbms_qopatch.replace_logscrpt_dirs()
[Thread-75] [ 2017-12-22 17:21:43.178 UTC ] [BasicStep.handleNonIgnorableError:509] oracle.sysman.assistants.util.SilentMessageHandler#3b2b52b7:messageHandler
[Thread-75] [ 2017-12-22 17:21:43.178 UTC ] [BasicStep.handleNonIgnorableError:510] ORA-29283: invalid file operation
ORA-06512: at "SYS.DBMS_QOPATCH", line 1547
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 41
ORA-06512: at "SYS.UTL_FILE", line 478
ORA-06512: at "SYS.DBMS_QOPATCH", line 1532
ORA-06512: at "SYS.DBMS_QOPATCH", line 1417
ORA-06512: at line 1
Then I tried to use Oracle's official images, with no success.
The only thing I modified in Oracle's image creation process is the createAsContainerDatabase parameter in the dbca.rsp file. The original value was true and I changed it to false because I do not want to create a CDB.
Any idea what I'm doing incorrectly?
EDIT:
The image build fails on a Docker host running Fedora 25, Kernel Version: 4.10.10-200.fc25.x86_64.
On macOS and on Debian Jessie (Kernel Version: 3.16.0-4-amd64), dbca runs successfully.
Which storage driver do you use?
I had exactly the same issue with Solus 3, kernel 4.14.8-41.current
Docker version:
Server:
Version: 17.11.0-ce
API version: 1.34 (minimum version 1.12)
Go version: go1.9.2
Git commit: 7cbbc92838236e442de83d7ae6b3d74dd981b586
Built: Sun Nov 26 16:15:47 2017
OS/Arch: linux/amd64
Experimental: false
..
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
The image I used works fine on Linux Mint (docker 11, storage driver: aufs).
So I tried to change "overlay" to "overlay2" in the daemon settings, and now it works.
Server Version: 17.11.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
...
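For others hitting this: the storage driver can usually be switched through the Docker daemon configuration file rather than distro-specific settings. A minimal sketch of the setting I changed (the file path assumes a standard Linux install):

```json
{
  "storage-driver": "overlay2"
}
```

Save this as /etc/docker/daemon.json, restart the daemon (on systemd hosts, `sudo systemctl restart docker`), and confirm the change with `docker info`. Note that switching storage drivers makes existing images and containers invisible until you switch back.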
Creating and starting Oracle instance
35% complete
40% complete
44% complete
49% complete
50% complete
53% complete
55% complete
Completing Database Creation
56% complete
57% complete
58% complete
62% complete
65% complete
66% complete
Executing Post Configuration Actions
100% complete
But I have no idea why it's not working with "overlay"...
SQLcl: Release 22.3 Production on Fri Nov 04 17:19:43 2022
SQL> apex export -applicationid 1681
Exporting Application 1681
java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "APEX_220100.WWV_FLOW_EXPORT_API", line 143
ORA-06512: at "APEX_220100.WWV_FLOW_GEN_API2", line 10218
ORA-06512: at "SYS.DBMS_ASSERT", line 493
ORA-06512: at "SYS.DBMS_ASSERT", line 583
ORA-06512: at "APEX_220100.WWV_FLOW_GEN_API2", line 10194
ORA-06512: at "APEX_220100.WWV_FLOW_EXPORT_INT", line 1234
ORA-06512: at "APEX_220100.WWV_FLOW_EXPORT_API", line 81
I did not find anything related to APEX export and ORA-06502 on the web.
I tried switching to SQLcl versions 20.3 and 21.4.
That didn't change anything; I got the same error in those versions, too.
It seems to be an environment problem, as co-workers are able to export applications from the same database. When I try to export this application from the APEX App Builder, it works.
The problem is caused by the NLS parameter NLS_NUMERIC_CHARACTERS.
For Switzerland this is ".'", so I guess the ' is the problem here.
This will resolve the problem:
alter session set NLS_NUMERIC_CHARACTERS = ',.';
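To check whether a session is affected before exporting — a minimal sketch, assuming access to the standard nls_session_parameters view:

```sql
-- Show the decimal and group separators the session currently uses;
-- for a Swiss locale this returns .' (the quote is what breaks the export).
SELECT value
  FROM nls_session_parameters
 WHERE parameter = 'NLS_NUMERIC_CHARACTERS';

-- Workaround: force decimal comma / group point for this session only.
ALTER SESSION SET NLS_NUMERIC_CHARACTERS = ',.';
```

The ALTER SESSION only affects the current SQLcl session, so it must be run before each export.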
We are running Oracle 12.1.0.2 OEE.
We are getting an intermittent ORA error while executing a REST call from a stored procedure:
[Error] Execution (124: 1): ORA-29273: HTTP request failed
ORA-29276: transfer timeout
ORA-06512: at "SYS.UTL_HTTP", line 1258
ORA-06512: at "EDB.GET_EXPECTED_VALUES_914", line 57
ORA-06512: at line 12
What we tried:
We changed the default timeout to:
UTL_HTTP.SET_TRANSFER_TIMEOUT(896000);
It worked for some time, but now we have started getting this timeout error again.
The timeout occurs after 1.5 minutes, which means it does not respect the value set via UTL_HTTP.SET_TRANSFER_TIMEOUT(896000).
The issue was network performance, which fluctuated.
UTL_HTTP.SET_TRANSFER_TIMEOUT(896000) modifies the default 60-second timeout,
and it must be set before initiating the REST call; otherwise, use the per-request form:
UTL_HTTP.SET_TRANSFER_TIMEOUT(req, 896000).
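A minimal PL/SQL sketch of the ordering described above (the URL is a placeholder; the timeout value is taken from the question and is in seconds):

```sql
DECLARE
  req  UTL_HTTP.req;
  resp UTL_HTTP.resp;
BEGIN
  -- Must be called BEFORE begin_request to change the session default.
  UTL_HTTP.set_transfer_timeout(896000);

  req  := UTL_HTTP.begin_request('http://example.com/endpoint');
  resp := UTL_HTTP.get_response(req);
  UTL_HTTP.end_response(resp);

  -- Alternatively, override the timeout for one request after it has
  -- been created but before fetching the response:
  -- UTL_HTTP.set_transfer_timeout(req, 896000);
END;
/
```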
I have Oracle 11g R2 (11.2.0.4.0, 64-bit, Standard Edition One) installed on Linux CentOS 7 and it works fine. Oracle APEX 20.2 is also installed and working fine.
I've added to my wallet the certificate of https://api.pagos360.com; my problem is the error I get when I call the site:
begin
UTL_HTTP.set_wallet('file:/path/to/wallet', '******');
end;
SELECT utl_http.request('https://api.pagos360.com') FROM dual;
When I run the select statement I get the following error:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1720
ORA-28860: Fatal SSL error
ORA-06512: at line 1
I also tried calling apex_web_service.make_rest_request from an APEX procedure, but I get a similar error:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1339
ORA-29261: bad argument
ORA-06512: at "APEX_200200.WWV_FLOW_WEB_SERVICES", line 1156
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1130
ORA-28860: Fatal SSL error
ORA-06512: at "APEX_200200.WWV_FLOW_WEB_SERVICES", line 1346
ORA-06512: at "APEX_200200.WWV_FLOW_WEBSERVICES_API", line 608
ORA-06512: at line 38
I think the problem is caused by my older Oracle 11g version, because when I test the same procedure on Oracle 19c it works fine.
Do you have any idea if there is a patch for this, or another way to solve it?
Regards
This is the first time I've asked a question; if there's anything I should improve, please tell me. Thanks.
Here is my system version :
jdk1.8.0_65
hadoop-2.6.1
hbase-1.0.2
scala-2.11.7
spark-1.5.1
zookeeper-3.4.6
Here is my question:
I'm going to build a system that stores data from sensors.
I need to store the data and analyze it in near real-time, so I use Spark to make my analysis run faster, but I'm wondering: do I really need the HBase database?
There is a problem when I run Spark:
First I run Hadoop's start-all.sh and Spark's start-all.sh, then I run spark-shell.
This is what I got:
15/12/01 22:16:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to Spark.
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_65)
Type in expressions to have them evaluated.
Type :help for more information.
15/12/01 22:16:56 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:16:59 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:07 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/12/01 22:17:07 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/12/01 22:17:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/12/01 22:17:11 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
SQL context available as sqlContext.
scala>
There are so many warnings; am I doing the right thing? For example, where can I set spark.app.id, and do I even need spark.app.id? And what does "Failed to get database default, returning NoSuchObjectException" mean?
Thanks for helping me.
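On the spark.app.id warning specifically: inside spark-shell a SparkContext already exists as sc, but a standalone application can set an explicit name and id itself. A sketch for the Spark 1.5 era (the names here are illustrative, not required values):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Setting an explicit application name (and, optionally, spark.app.id)
// silences the MetricsSystem warning about the missing spark.app.id.
val conf = new SparkConf()
  .setAppName("sensor-analytics")
  .set("spark.app.id", "sensor-analytics-001")

val sc = new SparkContext(conf)
```

The warning is harmless either way; the other warnings (BoneCP, NoSuchObjectException) just mean the Hive metastore is being initialized for the first time.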
I have an RMAN full database backup of Oracle 10g (10.2.0.3) taken on Sun Solaris, which I want to restore on Oracle 11g (11.2.0.3) on Linux. The backup pieces were transferred to the Oracle 11g server manually in binary mode. I have only the RMAN backup and no access to the primary database from which the backup was taken.
-rwxrwxr-x 1 mepc dba 36356096 Jul 16 14:49 snapcf_MEPC.f
-rwxrwxr-x 1 mepc dba 166028800 Jul 16 15:29 MEPC_full_backup_MEPC_nnnbkn9f_1_1
-rwxrwxr-x 1 mepc dba 169567744 Jul 16 15:29 MEPC_full_backup_MEPC_nmnbkn9f_1_1
-rwxrwxr-x 1 mepc dba 164813824 Jul 16 15:39 MEPC_full_backup_MEPC_nonbkn9f_1_1
-rwxrwxr-x 1 mepc dba 144025600 Jul 16 16:06 MEPC_full_backup_MEPC_nqnbkn9f_1_1
-rwxrwxr-x 1 mepc dba 168576512 Jul 16 16:09 MEPC_full_backup_MEPC_npnbkn9f_1_1
-rwxrwxr-x 1 mepc dba 168649216 Jul 16 17:33 MEPC_full_backup_MEPC_o5nbkpvv_1_1
-rwxrwxr-x 1 mepc dba 162847232 Jul 16 17:34 MEPC_full_backup_MEPC_o6nbkpvv_1_1
-rwxrwxr-x 1 mepc dba 167351808 Jul 16 17:35 MEPC_full_backup_MEPC_o7nbkpvv_1_1
-rwxrwxr-x 1 mepc dba 166838272 Jul 16 17:36 MEPC_full_backup_MEPC_o8nbkpvv_1_1
-rwxrwxr-x 1 mepc dba 166876160 Jul 16 17:37 MEPC_full_backup_MEPC_o9nbkpvv_1_1
-rwxrwxr-x 1 mepc dba 327606272 Jul 16 17:54 MEPC_full_backup_MEPC_o4nbknav_1_1
-rwxrwxr-x 1 mepc dba 549658624 Jul 16 18:26 MEPC_full_backup_MEPC_o2nbknav_1_1
-rwxrwxr-x 1 mepc dba 162984448 Jul 16 18:28 MEPC_full_backup_MEPC_oanbkpvv_1_1
-rwxrwxr-x 1 mepc dba 163567616 Jul 16 18:29 MEPC_full_backup_MEPC_obnbkpvv_1_1
-rwxrwxr-x 1 mepc dba 161380352 Jul 16 18:29 MEPC_full_backup_MEPC_ocnbkpvv_1_1
-rwxrwxr-x 1 mepc dba 1072275456 Jul 18 13:52 MEPC_full_backup_MEPC_o3nbknav_1_1
-rwxrwxr-x 1 mepc dba 1813348352 Jul 18 17:00 MEPC_full_backup_MEPC_o1nbknav_1_1
-rwxrwxr-x 1 mepc dba 36438016 Jul 25 15:45 controlfile_bkup_MEPC_c-1469445140-20120522-09
The backup was taken in the above format. I know the ORACLE_SID and DBID of the database from which the backup was taken.
Whenever I try the following commands:
mepc#tcstctmatson:/mepc_backup/May22fullbkp$ rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Tue Jul 31 12:14:54 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: MEPC (DBID=1595278680)
RMAN> shutdown;
using target database control file instead of recovery catalog
database closed
database dismounted
Oracle instance shut down
RMAN> startup nomount;
connected to target database (not started)
Oracle instance started
Total System Global Area 1071333376 bytes
Fixed Size 1349732 bytes
Variable Size 620758940 bytes
Database Buffers 444596224 bytes
Redo Buffers 4628480 bytes
RMAN> restore spfile to '$ORACLE_HOME/dbs/initMEPC.ora' from autobackup db_recovery_file_dest='/mepc_backup/May22fullbkp' db_name='MEPC';
I get the following error:
Starting restore at 31-JUL-12
using channel ORA_DISK_1
recovery area destination: /mepc_backup/May22fullbkp
database name (or database unique name) used for search: MEPC
channel ORA_DISK_1: no AUTOBACKUPS found in the recovery area
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120731
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120730
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120729
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120728
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120727
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120726
channel ORA_DISK_1: looking for AUTOBACKUP on day: 20120725
channel ORA_DISK_1: no AUTOBACKUP in 7 days found
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 07/31/2012 12:22:06
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece
I also tried cataloging the RMAN backup pieces, and the following errors were reported:
List of Files Which Where Not Cataloged
=======================================
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_nonbkn9f_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_obnbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_ocnbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o7nbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o9nbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_nmnbkn9f_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_nnnbkn9f_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o3nbknav_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o6nbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/controlfile_bkup_MEPC_c-1469445140-20120522-09
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_npnbkn9f_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_oanbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_nqnbkn9f_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o1nbknav_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o5nbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/snapcf_MEPC.f
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o2nbknav_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o4nbknav_1_1
RMAN-07517: Reason: The file header is corrupted
File Name: /mepc_backup/May22fullbkp/MEPC_full_backup_MEPC_o8nbkpvv_1_1
RMAN-07517: Reason: The file header is corrupted
The files are not corrupted, as I checked the checksums on both servers and they match.
Please help me restore this RMAN Oracle 10g backup on Oracle 11g, and let me know where I am going wrong.
Thanks in advance.
You cannot do this.
AFAIK, Solaris (assuming SPARC) and Linux (assuming Intel) have different endian formats, and this is your problem. You could use the cross-platform transportable tablespace scenario.
See the Oracle® Database Backup and Recovery Reference.
If your endian format does turn out to be the same, you should convert the database using RMAN CONVERT DATABASE. In that case you could restore cross-platform and cross-version.
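A sketch of what the same-endian conversion could look like (the platform name, paths, and script location are illustrative, not taken from the question):

```sql
-- Run with RMAN connected to the SOURCE database, opened read-only.
CONVERT DATABASE
  NEW DATABASE 'MEPC'
  TRANSPORT SCRIPT '/tmp/transport_mepc.sql'
  TO PLATFORM 'Linux x86 64-bit'
  DB_FILE_NAME_CONVERT '/u01/oradata/MEPC/' '/u02/oradata/MEPC/';
```

This requires access to the source database, which the asker does not have, so it mainly applies when the conversion can be planned before the migration.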
Not applicable for 10g/11g, but this is different in 12c:
In 12c, RMAN offers the following options with the BACKUP command:
FOR TRANSPORT: This option creates a backup set which can be transported to any destination. If the destination database uses a different endian format than the source, the endian format conversion is performed on the destination database.
TO PLATFORM: This option causes the endian format conversion to be performed on the source database, and it can only be used for the supported platforms.
DATAPUMP: This specifies that a Data Pump export dump file is created while performing a cross-platform backup. The dump file is created in a separate backup set.
Use the command below to restore from autobackup; RMAN knows the path and will restore it:
restore controlfile from autobackup;
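Since the autobackup search in the question came up empty, it may help to set the DBID explicitly and catalog the copied pieces. A sketch using the DBID and paths shown in the question (this does not remove the platform/version caveats raised in the other answers):

```sql
-- In RMAN, before mounting:
SET DBID 1595278680;
STARTUP NOMOUNT;

-- Point the restore at the known autobackup piece directly...
RESTORE CONTROLFILE FROM
  '/mepc_backup/May22fullbkp/controlfile_bkup_MEPC_c-1469445140-20120522-09';
ALTER DATABASE MOUNT;

-- ...then make RMAN aware of all the transferred backup pieces.
CATALOG START WITH '/mepc_backup/May22fullbkp/';
```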
You first need to know whether Solaris and Linux are the same endian format. I see that Solaris 64-bit and Linux 64-bit are the same endian.
Even with the same endian, you will see this error when "the source production database has a 32KB tablespace and an initialization parameter defined for db_32k_cache_size, but the target pfile didn't have the parameter db_32k_cache_size defined".
Set db_32k_cache_size in the pfile/spfile of the target.
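A minimal sketch of that fix — either add the line to the target's pfile, or set it through the spfile (the 256M size is illustrative; size it to your workload):

```sql
-- pfile entry:
--   db_32k_cache_size=256M

-- or, when the target runs on an spfile:
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE = SPFILE;
```

The instance must be restarted for the new buffer cache to be allocated.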