Error trying to start Oracle 12c on Solaris 11

After installing Oracle 12c on Solaris 11, I run SQL*Plus, type startup, and get this error:
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0.2/db_1/dbs/initDB12C.ora'
Any ideas? I've tried everything I found on the net.
.profile file:
ORACLE_HOSTNAME=solaris; export ORACLE_HOSTNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/db_1; export ORACLE_HOME
ORACLE_SID=DB12C; export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$CRS_HOME/lib
PATH=$ORACLE_HOME/bin:$PATH; export PATH
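Given those settings, the instance will look for a parameter file named after the SID, which is exactly the path in the LRM-00109 message. A quick sanity check (plain shell, assuming the .profile above has been sourced):
# The pfile the instance expects; LRM-00109 says this file is missing
ls -l $ORACLE_HOME/dbs/init${ORACLE_SID}.ora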
init.ora file:
# Change '<ORACLE_BASE>' to point to the oracle base (the one you specify at
# install time)
db_name='ORCL'
memory_target=1G
processes = 150
audit_file_dest='<ORACLE_BASE>/admin/orcl/adump'
audit_trail ='db'
db_block_size=8192
db_domain=''
db_recovery_file_dest='<ORACLE_BASE>/fast_recovery_area'
db_recovery_file_dest_size=2G
diagnostic_dest='<ORACLE_BASE>'
dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
# You may want to ensure that control files are created on separate physical
# devices
control_files = (ora_control1, ora_control2)
compatible ='11.2.0'
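Since LRM-00109 simply means no pfile (or spfile) was found at the expected location, one common fix is to copy this sample into place under the name init<SID>.ora and replace the placeholders. A minimal sketch, assuming the sample above sits at $ORACLE_HOME/dbs/init.ora (adapt names and paths to your install):
# Copy the sample to the name the instance expects for SID DB12C
cp $ORACLE_HOME/dbs/init.ora $ORACLE_HOME/dbs/initDB12C.ora
# Edit the <ORACLE_BASE> placeholders to real paths, then pre-create them
mkdir -p $ORACLE_BASE/admin/orcl/adump $ORACLE_BASE/fast_recovery_area
# Start the instance explicitly against that pfile
sqlplus / as sysdba <<EOF
STARTUP PFILE=$ORACLE_HOME/dbs/initDB12C.ora
EOF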

Related

Oracle full EXPORT with exclude and NOT using par file

I need to do a full export of a 12.2 database. Recently we placed 2 tables in it with over 4 million records that will remain static. I'd like to eliminate them from the daily EXPDP as they have been archived offline.
This EXPDP is launched via a scheduled task and calls a series of batch files that have defined variables that are passed from batch file to batch file. This produces a series of log and archive files important in the larger scheme of things.
I do this without a .PAR file, as the .PAR file does not seem to like any VARIABLE names defined in the batch files.
I can run this at the command prompt without issue, but if I call it via a batch file I get an error:
LRM-00111: no closing quote for value 'table:"LIK'
EXPDP *******/********@%dbname% FULL=Y exclude=statistics exclude=table:\"LIKE\'%_80\'\" DUMPFILE=%bckupdate%.dmp LOGFILE=%bckupdate%.log reuse_dumpfiles=yes
Any helpful hints on how to either use a variable name (as in %DBNAME%) in the PAR file or proper formatting for the batch file would be appreciated.
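One thing worth ruling out first (an assumption on my part, since the batch file itself isn't shown): inside a .bat file, cmd treats % as the start of a variable reference, so the literal percent in the LIKE pattern has to be doubled, while a single % works at the interactive prompt. Roughly:
REM inside the batch file: %% is a literal %; %dbname%, %bckupdate% still expand
EXPDP user/password@%dbname% FULL=Y exclude=statistics exclude=table:\"LIKE '%%_80'\" DUMPFILE=%bckupdate%.dmp LOGFILE=%bckupdate%.log reuse_dumpfiles=yes
(user/password stand in for the masked credentials above.)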
You can try this script, expdp_powershell.ps1.
For example
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1 -user_name system -user_password manager -connect_string test -exclude table:\"LIKE\'%_80\'\"
or
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1
Script expdp_powershell.ps1
param(
    # Connection and export settings; all can be overridden on the command line
    [string]$user_name = "system",
    [string]$user_password = "manager",
    [string]$connect_string = "TEST",
    [string]$export_mode = "FULL=Y",
    [string]$exclude = "table:\""LIKE \'%_80\'\"""
)
# Timestamp the dump and log names so repeated runs do not collide
$date_time_log = Get-Date -Format "yyyyMMddHHmmss"
$DUMPFILE = "backup" + $date_time_log + ".dmp"
$LOGFILE = "backup_log" + $date_time_log + ".log"
$reuse_dumpfiles = "yes"
$DIRECTORY = "DATA_PUMP_DIR"
# Show the exclude clause actually being passed, then run the export
echo $exclude
EXPDP $user_name/$user_password@$connect_string $export_mode exclude=statistics exclude=$exclude DIRECTORY=$DIRECTORY DUMPFILE=$DUMPFILE LOGFILE=$LOGFILE reuse_dumpfiles=$reuse_dumpfiles
For example output
E:\upwork\stackoverflow\expdp_powershell>powershell ./expdp_powershell.ps1 -user_name system -user_password manager -connect_string test -exclude table:\"LIKE\'%_80\'\"
table:\"LIKE \'%_80\'\"
Export: Release 11.2.0.4.0 - Production on Sat Jan 9 12:44:10 2021
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/********#TEST FULL=Y exclude=statistics exclude=table:"LIKE \'%_80\'" DIRECTORY=DATA_PUMP_DIR DUMPFILE=backup20210109124410.dmp LOGFILE=ba
ckup_log20210109124410.log reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 363.1 MB
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE

Error 'Packet for query is too large' when I tried to make a query on my website

Again I need your help.
I'm trying to put my Java web site online.
What I use:
MySQL server: mysql -V reports mysql Ver 15.1 Distrib 10.1.23-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
Cayenne
Debian server
Java (Vaadin)
Packet for query is too large (4739923 > 1048576). You can change this
value on the server by setting the 'max_allowed_packet' variable.
What I tried:
1. As the error said, I tried to change the value on the server by doing:
Log on to my server
Connect to MySQL with: mysql -u root
Enter: SET GLOBAL max_allowed_packet=1073741824;
then restart the server with: /etc/init.d/mysql restart
But I still have the error.
2. I took a look at: How to change max_allowed_packet size
But when I opened the file with nano /etc/mysql/my.cnf, it looks like this (I don't have any [mysql] section):
root@XXXX:~# nano /etc/mysql/my.cnf
GNU nano 2.7.4 File: /etc/mysql/my.cnf Modified
# The MariaDB configuration file
#
# The MariaDB/MySQL tools read configuration files in the following order:
# 1. "/etc/mysql/mariadb.cnf" (this file) to set global defaults,
# 2. "/etc/mysql/conf.d/*.cnf" to set global options.
# 3. "/etc/mysql/mariadb.conf.d/*.cnf" to set MariaDB-only options.
# 4. "~/.my.cnf" to set user-specific options.
#
# If the same option is defined multiple times, the last one will apply.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# This group is read both both by the client and the server
# use it for options that affect everything
#
[client-server]
# Import all .cnf files from configuration directory
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mariadb.conf.d/
The folders/files in the 'mysql' folder were shown in a screenshot (not reproduced here).
Any hint will be very appreciated!
Thanks
EDIT: In /etc/mysql/mariadb.conf.d/50-server.cnf, I changed:
max_allowed_packet = 1073741824
max_connections = 100000
and I added: net_buffer_length = 1048576
For info, in MySQL Workbench I can see the server variables (screenshot not reproduced here).
EDIT2: Now, when I select the variable at the command line on the server, I get:
MariaDB [(none)]> SELECT @@global.max_allowed_packet;
+-----------------------------+
| @@global.max_allowed_packet |
+-----------------------------+
|                  1073741824 |
+-----------------------------+
1 row in set (0.00 sec)
SOLUTION (the error was not explicit): searching on com.mysql.jdbc.PacketTooBigException put me on the right track.
My Cayenne configuration was:
<url value="jdbc:mysql://IPADDRESS:22/DBBASENAME" />
<login userName="ServerUserName" password="ServerPassword" />
But it should be this (the URL was pointing at port 22, the SSH port, with the server login instead of the MySQL one):
<url value="jdbc:mysql://IPADDRESS/DBBASENAME" />
<login userName="MYSQLUserName" password="MYSQLPassword" />
Change it in my.cnf, then restart mysqld.
Better yet, put it in a file under /etc/mysql/mariadb.conf.d/, and specify the section:
[mysqld]
max_allowed_packet = 1073741824
What you did (SET) went away when you restarted. Even so, it only applied to connections that logged in after doing the SET.
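To double-check that the persistent setting took effect, something along these lines should work (standard MariaDB/MySQL commands; the service name may differ per install):
# restart so mariadb.conf.d/*.cnf is re-read, then query the live value
/etc/init.d/mysql restart
mysql -u root -e "SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';"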

ORA-27369: job of type EXECUTABLE failed with exit code: 274668

Hi all,
I am trying to call a shell script from PL/SQL. I created a program and a job and enabled them.
exec DBMS_SCHEDULER.CREATE_JOB(job_name=>'oam.loadLog_job',job_type=>'EXECUTABLE',job_action=>'/gx_working/select2.sh');
When I tried to run the code above, I got the following error:
ORA-27369: job of type EXECUTABLE failed with exit code: 274668
I searched the internet; exit code 274668 means "invalid run_group specified in externaljob.ora file".
So, what do I need to do to solve this problem?
Thanks.
My externaljob.ora file looks like this:
# $Header: externaljob.ora 16-dec-2005.20:47:13 rramkiss Exp $
#
# Copyright (c) 2005, Oracle. All rights reserved.
# NAME
# externaljob.ora
# FUNCTION
# This configuration file is used by dbms_scheduler when executing external
# (operating system) jobs. It contains the user and group to run external
# jobs as. It must only be writable by the owner and must be owned by root.
# If extjob is not setuid then the only allowable run_user
# is the user Oracle runs as and the only allowable run_group is the group
# Oracle runs as.
#
# NOTES
# For Porters: The user and group specified here should be a lowly privileged
# user and group for your platform. For Linux this is nobody
# and nobody.
# MODIFIED
# rramkiss 12/09/05 - Creation
#
##############################################################################
# External job execution configuration file externaljob.ora
#
# This file is provided by Oracle Corporation to help you customize
# your RDBMS installation for your site. Important system parameters
# are discussed, and default settings given.
#
# This configuration file is used by dbms_scheduler when executing external
# (operating system) jobs. It contains the user and group to run external
# jobs as. It must only be writable by the owner and must be owned by root.
# If extjob is not setuid then the only allowable run_user
# is the user Oracle runs as and the only allowable run_group is the group
# Oracle runs as.
run_user = nobody
run_group = nobody
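For what it's worth, the header above spells out the constraints that make a run_group "invalid": the group must exist, and the file must be owned by root and writable only by the owner. A first diagnostic sketch (an assumption on my part: Unix host, default file location under $ORACLE_HOME/rdbms/admin):
# must be owned by root and writable only by its owner
ls -l $ORACLE_HOME/rdbms/admin/externaljob.ora
# run_group must name a group that exists on this host (some platforms use nogroup)
getent group nobody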

sqlplus Dynamic Spool File Name

I need to give the spool file name dynamically and I have to pass the parameters when I call sqlplus. Below is what I tried
echo exit | sqlplus "{{ Oracle_username }}/{{ Oracle_pwd }}@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host={{ Oracle_HostName }})(Port=1521))(CONNECT_DATA=(SID={{ Oracle_SID }})))" @Script.sql 'AppName' 'DatabaseName' 'ObjectType'
Here I try to pass the app name, database name, and object type dynamically. Prior to the SQL*Plus step I create the folders dynamically (app name, database name, and object type are all folders, and they vary per application). Below is how my Script.sql looks:
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
<<SQL Script>>
SPOOL OFF
This doesn't work. Can someone tell me what needs to be changed?
You are passing the values you want to form your spool file path and name as arguments to your script, but you need to refer to them as positional parameters:
SPOOL &1/&2/&3/Output.csv
Or if you're going to reuse them for something else you could define your own variable, set from the positional parameters:
DEFINE AppName=&1
DEFINE DatabaseName=&2
DEFINE ObjectType=&3
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
The spool file path will be relative to the directory you're in when you run the script. If that isn't what you want then put the root before the first substitution variable in the spool command, whichever form you use.
You could also include the exit in your .sql file so you don't have to echo it in; and you could use a TNS alias instead of passing all of the connection information on the command line - or if you can use a service name instead of a SID, you could use the easy connect syntax which is a bit simpler:
sqlplus username/password@//hostname:1521/service_name @Script.sql 'AppName' 'DatabaseName' 'ObjectType'
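Putting those pieces together, a minimal Script.sql might look like this (the SELECT is only a placeholder for the real script body):
DEFINE AppName=&1
DEFINE DatabaseName=&2
DEFINE ObjectType=&3
SPOOL &&AppName/&&DatabaseName/&&ObjectType/Output.csv
SELECT * FROM dual;
SPOOL OFF
EXIT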
Set your app name, database name, and object type as environment variables, then try it like below:
[oracle@ct-myhost-02 ~]$ export app_name=/stage
[oracle@ct-myhost-02 ~]$ export database_name=PSES
[oracle@ct-myhost-02 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 1 12:04:08 2017
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> spool $app_name/$database_name/out.csv
SQL> select * from dual;
D
-
X
SQL> spool off;
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
[oracle@ct-myhost-02 ~]$ ls -l /stage/PSES/out.csv
-rw-r-----. 1 oracle oinstall 286 Feb 1 12:04 /stage/PSES/out.csv

Why do I get ORA-39001: invalid argument value when I try to impdp in Oracle 12c?

When I run this command in Oracle 12c SE2:
impdp system/Oracle_1@pdborcl directory=DATA_PUMP_DIR dumpfile=mydb.dmp nologfile=Y
I get this:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39088: directory name DATA_PUMP_DIR is invalid
We used to import this into 11g all the time.
How can I solve these errors?
From the 12c documentation:
Be aware of the following requirements when using Data Pump to move data into a CDB:
...
The default Data Pump directory object, DATA_PUMP_DIR, does not work with PDBs. You must define an explicit directory object within the PDB that you are exporting or importing.
You will need to define your own directory object in your PDB, which your user (system here) has read/write privileges against.
create directory my_data_pump_dir as 'C:\app\OracleHomeUser1\admin\orcl\dpdump';
grant read, write on directory my_data_pump_dir to system;
It can be the same operating system directory that DATA_PUMP_DIR points to, you just need a separate directory object. But I've used the path you said you'd prefer, from a comment on a previous question.
Then the import is modified to have:
... DIRECTORY=my_data_pump_dir DUMPFILE=mydb.dmp
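Putting it together with the command from the question, the import becomes:
impdp system/Oracle_1@pdborcl directory=my_data_pump_dir dumpfile=mydb.dmp nologfile=Y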
