ORA-00020: maximum number of processes - Oracle

Recently, I have been facing this error:
ORA-00020: maximum number of processes
I have read articles that say I have to increase the processes, sessions, and transactions parameters.
I tried. Put simply, I did the following:
I logged in as SYSDBA to the database.
I tried to use the following command:
alter system set processes = 500 scope = spfile;
To increase sessions and transactions, I read that I should follow the formulas below:
sessions = (1.1*processes) + 5
transactions = (1.1*sessions)
Well, I am not sure if that's correct or not.
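If the formulas are right, processes = 500 would give sessions = (1.1 * 500) + 5 = 555 and transactions = 1.1 * 555 = 610.5, so roughly 611.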
So I tried to increase processes first with the command above.
I got another error:
ORA-02095: specified initialization parameter cannot be modified
After reading about ORA-02095, I found that I have to check whether the instance uses an spfile or a pfile:
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
FROM sys.v_$parameter WHERE name = 'spfile';
no rows selected
Can you please help me?

The Solution:
Go to the database files, open ADMIN\database\pfile\init.ora, and change
processes = 59
to
processes = 200
then shut down and restart the database.
Problem solved. Thanks, all.
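A side note on the above: if you create an spfile from that pfile, future changes can be made with ALTER SYSTEM instead of hand-editing init.ora. A minimal sketch, run as SYSDBA:
SQL> create spfile from pfile;
SQL> shutdown immediate
SQL> startup
The next startup picks up the spfile, after which alter system set processes = 500 scope = spfile; works as expected.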

Open cmd as Administrator
C:\Users\Usuario> sqlplus "/as sysdba"
SQL> alter system set processes=500 scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter processes
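To confirm the limit really is being hit, and to size the new value, v$resource_limit shows current and high-water usage. Assuming you can still open a SYSDBA session:
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('processes', 'sessions');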

Related

Lost Redologs and Archivelogs

I am using Oracle XE 11g R2 and, due to a mistake, all the archivelogs were deleted by running the delete archivelog all; command in RMAN.
Also, one set of redo logs was deleted, i.e. redo_g02a.log, redo_g02b.log and redo_g02c.log.
The other redo logs are available, i.e. redo_g01a.log, redo_g01b.log, redo_g01c.log and redo_g03a.log, redo_g03b.log and redo_g03c.log.
Is there a way I can start up the database now? It is a production database and I am really worried.
I tried copying redo_g01a.log to redo_g02a.log ... but the alert log says:
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo_g02a.log'
USER (ospid: 30663): terminating the instance due to error 341
Any help will be much much appreciated.
First make a copy of your datafiles, redo logs, and control file. That way you can get back to this point.
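To locate those files, the standard v$ views list them; a quick sketch:
select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;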
If the database was shut down clean, you can try clearing the group and it will be recreated for you:
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2260048 bytes
Variable Size 675283888 bytes
Database Buffers 385875968 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL> alter database clear logfile group 2;
Database altered.
SQL> alter database open;
Database altered.
SQL>
If not, you will need to recover and open with the resetlogs option. Unfortunately, because you lost an entire log group, you may also have lost data.
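For reference, the incomplete-recovery path alluded to above looks roughly like this. It is only a sketch; where recovery stops depends on which logs and archives survive:
SQL> startup mount;
SQL> recover database until cancel;
SQL> alter database open resetlogs;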

Oracle 12c extended to support varchar2 > 4000 bytes doesn't work for a user who is not SYSDBA

On Oracle 12c with compatible=12.0.0, I changed max_string_size to EXTENDED with SYSDBA privileges.
I can now create a table with a varchar2(16000) column and insert a string > 4000 bytes, but only when connected as SYSDBA.
When connected as a normal user rather than SYSDBA, I cannot work with varchar2 > 4000 bytes; an error ORA-60019 is thrown. Can anyone explain why?
The parameters still show max_string_size=extended and compatible=12.0.0 when logged in as a user who is not SYSDBA.
Do the following steps and let me know if the issue is resolved. I am asking you to set the parameter again just to make sure everything is in order.
1) Back up your spfile (get the location of the spfile first):
sqlplus / as sysdba
show parameter spfile;
2) Shut down the database.
sqlplus / as sysdba
shutdown immediate
3) Restart the database in UPGRADE mode.
startup upgrade
4) Change the setting of MAX_STRING_SIZE to EXTENDED.
alter system set MAX_STRING_SIZE ='EXTENDED' scope=spfile;
5) Run the utl32k.sql script, then utlrp.sql, while connected as SYSDBA:
sqlplus / as sysdba
@%ORACLE_HOME%\RDBMS\ADMIN\utl32k.sql
@%ORACLE_HOME%\RDBMS\ADMIN\utlrp.sql
Note: The utl32k.sql script increases the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns for the views where this is
required. The script does not increase the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way
the SQL for those views is written.
The rdbms/admin/utlrp.sql script recompiles invalid objects. You must be connected AS SYSDBA to run it.
6) Restart the database in NORMAL mode.
sqlplus / as sysdba
shutdown immediate
startup;
show parameter MAX_STRING_SIZE;
7) Create a new table with a varchar2 column larger than 4000 bytes.
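A quick sanity check for step 7 (the table and column names here are just placeholders):
create table t_extended (txt varchar2(16000));
insert into t_extended values (rpad('x', 5000, 'x'));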
You must change your TNSNAMES.ORA file to connect to the PDB.
I had the same problem.
I solved it with the information at the link below:
https://dba.stackexchange.com/questions/240761/in-oracle-12c-tryiyng-to-create-table-with-columns-greater-than-4000
The reason for that behaviour is that you are in a multi-tenant environment, i.e. a master container called the CDB ("Container Database"), and any number of PDBs ("Pluggable Databases").
The CDB ("container") is a kind of "system" database that is there to contain the actual customer databases ("pluggable databases" or PDBs). The CDB is not intended to receive any customer data whatsoever. Everything goes into one or more PDBs.
When you connect without specifying any service, you are automatically placed in the CDB. The extended strings parameter is ignored for the CDB: the limit remains 4000 bytes. The following connects to the CDB. Creating a table with a long string is rejected, just like in your case:
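A sketch of what that looks like; the connect strings and names are placeholders, and the exact error text can vary (ORA-60019 in the question above):
sqlplus system/password
SQL> create table t (s varchar2(16000));
-- rejected: this session is in the CDB, where the 4000-byte limit still applies
sqlplus system/password@//localhost/mypdb
SQL> create table t (s varchar2(16000));
-- works: the session is in a PDB with max_string_size = extended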

Query very slow after a few executions

I'm new to Oracle and the following situation is driving me crazy. I'm working on an Oracle 11g database, and it often happens that I run a query in SQL Developer and it executes correctly in 5-6 seconds, while at other times the same query takes 300-400 seconds. Are there tools to debug what is happening when the query takes 300-400 seconds?
Update 1: a SQL Developer screenshot (not reproduced here) suggests the problem is the direct path read temp wait event.
Updates 2 and 3: performance reports were attached, also not reproduced here.
Any suggestions?
Try setting a trace, with USER being whatever user is experiencing the delay.
As sys:
GRANT ALTER SESSION TO USER;
As the user executing the trace:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
ALTER SESSION SET TRACEFILE_IDENTIFIER = "MY_TEST_SESSION";
Produce the error/issue, then as the user testing:
ALTER SESSION SET EVENTS '10046 trace name context off';
As system find out where the trace files are kept:
show parameter background_dump_dest;
Go to that directory and look for .trc/.trm files containing MY_TEST_SESSION. For example ORCL_ora_29772_MY_TEST_SESSION.trc.
After that, run tkprof on those files. On Linux:
tkprof ORCL_ora_29772_MY_TEST_SESSION.trc ORCL_ora_29772_MY_TEST_SESSION.tkprof explain=user/password sys=no
Read the tkprof output file and it will show you wait times for the given statements.
For more info, see the Oracle documentation on TKPROF and on enabling/disabling traces.
The best tool is Real-Time SQL Monitoring. It does not require changing code or access to the operating system. The only downside is it requires licensing the Tuning Pack.
Compare this single line of code with the trace steps in the other answer. Also, the output looks much nicer.
select dbms_sqltune.report_sql_monitor(sql_id => 'your sql id', type => 'text') from dual;
There's almost never a need to use trace in 11g and beyond.
This behaviour can be caused by cardinality feedback bugs/issues in 11gR2; I had a similar issue. You can test whether this is the case by turning the feature off with _optimizer_use_feedback=false.
Also try applying the latest updates.
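For reference, hidden underscore parameters must be double-quoted when set, and they are unsupported, so a session-level test is the prudent way to try this:
alter session set "_optimizer_use_feedback" = false;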

Oracle: idle_time appears to be ignored

In my understanding, creating a profile with the idle_time set to a certain value (in minutes) and creating a user with this profile should force the SNIPED status for that user's session in case he is idle for longer than idle_time. When the user tries to execute a query after this has happened, he receives a message that he must connect again.
First question: Is that right? If so, read on:
I'm running a test script as follows in sqlplus (with real values in place of the placeholders, obviously):
connect system/<password>#<tns>
CREATE PROFILE test_profile LIMIT idle_time 1;
CREATE USER test_user PROFILE test_profile IDENTIFIED BY test_user;
GRANT CREATE SESSION TO test_user;
GRANT ALTER SESSION TO test_user;
GRANT SELECT ON <schema>.<table> TO test_user;
disconnect;
connect test_user/test_user#<tns>
SELECT * FROM <schema>.<table>;
Everything works up to this point; the sqlplus window is still open. Now I open an additional sqlplus window and connect using the system account, running the following query after doing other stuff for a while:
SELECT username, status, seconds_in_wait FROM v$session WHERE username = 'TEST_USER';
I get something like:
USERNAME STATUS SECONDS_IN_WAIT
--------- -------- ---------------
TEST_USER INACTIVE 1166
Why has the status not been set to SNIPED?
Obviously, if I run another query from the test_user's sqlplus window, I do not get a message asking me to reconnect.
You need to set the database's RESOURCE_LIMIT parameter to TRUE in order for resource limits in profiles to take effect. Assuming you use an spfile (otherwise omit the scope = BOTH part):
ALTER SYSTEM SET resource_limit = TRUE SCOPE = BOTH;
Once you do that, PMON should start sniping the sessions that have exceeded your IDLE_TIME when it wakes up every few minutes.
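Once resource_limit is enabled, sessions that exceeded IDLE_TIME show up like this (allow a few minutes of slack, since PMON does the sniping on its own schedule):
SELECT username, status, seconds_in_wait FROM v$session WHERE status = 'SNIPED';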

Any way to tell if an Oracle export is still running?

I've got an Oracle export that's been running for almost 2 days now.
The export file shows a modification time of 5 am this morning and does not appear to be growing.
The log file does not show any errors.
I think it's stuck and I need to restart it, but I'd hate to cancel it only to find out it was still going and I simply didn't wait long enough.
Is there any way to know definitively whether the export is still running?
select * from v$session where status = 'ACTIVE';
Couple of things:
1) The FEEDBACK parameter for export can be helpful to detect activity; I usually set it to 1000 or 10000 records depending on the size of the database I am exporting. (An example invocation appears at the end of this answer.)
2) You may also be able to take advantage of v$session_longops to see progress.
The export program doesn't update longops directly, but if there are large tables, the table-scan progress that is a by-product of the export will show up; you should at least see PercentDone counting up while large tables are being exported:
SELECT ROUND(sofar/totalwork*100,2) as PercentDone,
v$session_longops.*
FROM v$session_longops
WHERE sofar <> totalwork
ORDER BY target, sid;
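For item 1, an example classic-export invocation with FEEDBACK; file names and credentials are placeholders:
exp system/password full=y file=full_export.dmp log=full_export.log feedback=10000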
-Dave
Method 1:
ps -ef|grep expdp
Method 2:
Step 1: Find the export job using the query below:
select job_name,owner_name,state from dba_datapump_jobs;
JOB_NAME OWNER_NAME STATE
------------------------------ -------------------- --------------------
SYS_EXPORT_FULL_04 SYSTEM NOT RUNNING
EXP_TEST_FULL SYSTEM NOT RUNNING
Step 2: Attach to the job using the command below:
expdp system/password attach=EXP_TEST_FULL
Step 3: Kill the job:
Export> KILL_JOB
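Worth noting before you kill anything: at the Export> prompt, STATUS shows whether the job is actually progressing; STOP_JOB halts it in a restartable state, while KILL_JOB removes the job and its master table.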
