alter system flush shared_pool oracle

I have two questions.
First, is there a substantial difference between executing the ALTER SYSTEM FLUSH SHARED_POOL command on the server and executing it from a client? At my company they taught me to run that command directly on the server, but I think it's just a command that goes over the network; only a small message is sent, so it shouldn't make a meaningful difference the way it would with a lot of data. I'm talking about a system that takes roughly 5 minutes to flush.
Second, how can I flush one instance from another instance?

ALTER SYSTEM FLUSH SHARED_POOL; can be run from either the client or the server; it doesn't matter.
Many DBAs will run the command from the server, for two reasons. First, many DBAs run all commands from the server, usually because they never learned the importance of an IDE. Second, the command ALTER SYSTEM FLUSH SHARED_POOL; only affects one instance in a clustered database. Connecting directly to the server is usually an easy way of ensuring you connect to each database instance of a cluster.
But you can easily flush the shared pool from all instances without directly connecting to each instance, using the below code. (Thanks to berxblog for this idea.)
--Assumes you have elevated privileges, like DBA role or ALTER SYSTEM privilege.
create or replace function flush_shared_pool return varchar2 authid current_user as
begin
    execute immediate 'alter system flush shared_pool';
    return 'Done';
end;
/
select *
from table(gv$(cursor(
select instance_number, flush_shared_pool from v$instance
)));
INSTANCE_NUMBER FLUSH_SHARED_POOL
--------------- -----------------
              1 Done
              3 Done
              2 Done
I partially disagree with #sstan - flushing the shared pool should be rare in production, but it may be relatively common in development. Flushing the shared pool and buffer cache can help imitate running queries "cold".
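For example, in a development environment you might run something like this before timing a query "cold" (a small sketch, assuming you have the ALTER SYSTEM privilege; don't do this on a busy production system):
--Empty the library cache and the data cache so the next run starts "cold".
alter system flush shared_pool;
alter system flush buffer_cache;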

Related

Auto Optimizer Stats Collection Job Causing Oracle RDS Database to Restart

We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU, 32 GB RAM instance. The database is not big (35 GB), and the PGA Aggregate Limit is 8 GB with a Target of 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes a substantial amount of PGA memory (approximately 7 GB), and sometimes this makes the database unstable; AWS then loses communication with the RDS instance and restarts the database.
We thought this might be linked to the existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS have fixed it in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection job was running. There are a couple more databases on the same version where the same pattern was observed and the database was restarted by AWS. We have disabled the job on those databases to avoid further occurrences of this issue, but we do want to run it, since disabling it means the database will be left with stale statistics.
Any pointers on how to tackle this issue?
I found the same issue in my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it in your non-production environments and, if you can, consult with Oracle Support. Dealing with hidden parameters might lead to unexpected side effects, so apply it at your own risk.
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for a large amount of statistics gathering, you can manually analyze those tables or change their preferences.
First, use the below SQL to see which objects are causing the most statistics gathering. According to the test case in bug 30846782, the problem seems to be only related to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that generate a lot of PGA memory with the below query. (However, if the database restarts, it's possible that AWR won't save the recorded values.)
select username, event, sql_id, pga_allocated/1024/1024/1024 pga_allocated_gb, gv$active_session_history.*
from gv$active_session_history
join dba_users on gv$active_session_history.user_id = dba_users.user_id
where pga_allocated/1024/1024/1024 >= 1
order by sample_time desc;
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
    dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
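For example, if a nightly process truncates and reloads a large table, a rough sketch of a manual gather step right after the load might look like this (the SALES_STAGING and SALES_SOURCE names are hypothetical):
begin
    --Hypothetical nightly reload.
    execute immediate 'truncate table sales_staging';
    insert /*+ append */ into sales_staging select * from sales_source;
    commit;
    --Gather stats now, instead of waiting for the automatic job to notice the 100% change.
    dbms_stats.gather_table_stats(ownname => user, tabname => 'SALES_STAGING');
end;
/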
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
    dbms_stats.set_table_prefs
    (
        ownname => user,
        tabname => 'PGA_STATS_TEST',
        pname   => 'STALE_PERCENT',
        pvalue  => '20'
    );
end;
/

Designing monitoring of jobs on Oracle database

I have several Oracle databases where my in-house applications are running. Those applications use both dba_jobs and dba_scheduler_jobs.
I want to write a monitoring function, check_my_jobs, which will be called periodically by Nagios to check if everything is OK with my jobs. (Are they running? Are they broken? Is next_run_date delayed? And so on.)
Solutions: Because I have to monitor jobs on different databases, there are two ways of implementing a solution:
Create a monitoring function and configuration tables only on one database which will check jobs on every database using database links.
pros: Centralized functionality, easy to maintain.
cons: I have to do the checks using database links.
Create a monitoring function and configuration tables on every database where I want to check jobs.
pros: I don't have to use DB links
cons: Duplicated monitoring code on every database
Which solution is better?
I'd go with option #1 - centralized functionality that uses database links.
Database links have an undeserved bad reputation. One of the main reasons is that too many people use public database links, where anyone connecting to the database can use the link. That's obviously a security nightmare, but it's not the default setting and it's easy to avoid that trap.
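For example, a private link that only the owning schema can use is usually what you want (a sketch; all names and the connect string are placeholders):
--Private link: usable only by the schema that owns it.
create database link monitoring_link
    connect to monitor_user identified by "SomePassword1"
    using 'REMOTE_DB_TNS';

--Avoid PUBLIC links unless you really need them - every user on the database can use them.
--create public database link monitoring_link connect to monitor_user identified by "SomePassword1" using 'REMOTE_DB_TNS';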
Some other issues with database links:
They don't perform well for huge inserts of millions of rows. On the other hand, they're great for many small SELECTs or INSERTs. I frequently have hundreds of links open and fetching data concurrently, on 10-year-old hardware, and it works great.
They make execution plans more difficult to troubleshoot.
Not all data types are natively supported. This is better in 12.2, but in earlier versions you will need to use an INSERT to move data types like CLOB into tables, and then read from those tables.
For DDL you'll need to use DBMS_UTILITY.EXEC_DDL_STATEMENT@LINK_NAME('create ...'); Make sure to only use DDL in there. Other types of commands will silently fail.
Links may hang indefinitely in a few rare situations, like if the database has an archiver error or a guaranteed restore point that's full. (This one is really a blessing in disguise - many tools like Oracle Enterprise Manager will not catch those issues. You may want to have a background job checking for database link queries that have been running longer than X minutes; see the sketch after the example below.)
Links should not be hard-coded or else they could invalidate the package. But this may not matter - you'll probably want to loop through the list of databases and use dynamic SQL anyway. And if the link doesn't exist it's pretty easy to create a new one. Here's an example:
declare
    v_result varchar2(4000);
begin
    --Loop through a configuration table of links.
    for links in
    (
        select database_name, db_link
        from dbs_to_monitor
        left join user_db_links
            on dbs_to_monitor.database_name = user_db_links.db_link
        order by database_name
    ) loop
        --Run the query if the link exists.
        if links.db_link is not null then
            begin
                --Note the use of REPLACE and the alternative quoting mechanism, q'[...]'.
                --This looks a bit silly with this small example, but in a real-life query
                --it avoids concatenation hell and makes the query much easier to read.
                execute immediate replace(q'[
                    select dummy from dual@#DB_LINK#
                ]',
                '#DB_LINK#', links.db_link)
                into v_result;
                dbms_output.put_line('Result: '||v_result);
            --Catch errors if the links are broken or some other error happens.
            exception when others then
                dbms_output.put_line('Error with '||links.db_link||': '||sqlerrm);
            end;
        --Error if the link was not created.
        --You will have to run:
        --create database link LINK_NAME connect to USERNAME identified by "PASSWORD" using 'TNS_STRING';
        else
            dbms_output.put_line('ERROR - '||links.database_name||' does not exist!');
        end if;
    end loop;
end;
/
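Related to the point above about links that can hang, here is a rough sketch of a check for database link calls that have been active for more than 10 minutes (it assumes you can query GV$SESSION):
--Active sessions that have spent more than 10 minutes on a database link wait.
select inst_id, sid, serial#, username, event, last_call_et
from gv$session
where event like '%dblink%'
    and status = 'ACTIVE'
    and last_call_et > 600;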
Despite all of that, database links are great because you can do everything in PL/SQL, on one database. In a single language you can create an agentless monitoring solution and don't have to worry about installing and fixing agents.
As an example, I built the open source program Method5 to do everything using database links. With that program installed you could gather results from hundreds of databases as simply as running select * from table(m5('select * from dba_jobs'));. That program is probably overkill for your scenario but it shows that database links are all you need for a full monitoring solution.

Lost Redologs and Archivelogs

I am using Oracle XE 11g R2 and, due to a mistake, all the archive logs were deleted by running the delete archivelog all; command in RMAN.
Also, one set of redo logs was deleted, i.e. redo_g02a.log, redo_g02b.log and redo_g02c.log.
The other redo logs are available, i.e. redo_g01a.log, redo_g01b.log, redo_g01c.log and redo_g03a.log, redo_g03b.log and redo_g03c.log.
Is there a way I can start up the database now? It is a production database and I am really worried.
I tried copying redo_g01a.log to redo_g02a.log ... but the alert log says:
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo_g02a.log'
USER (ospid: 30663): terminating the instance due to error 341
Any help will be much much appreciated.
First make a copy of your datafiles, redo logs, and control file. That way you can get back to this point.
If the database was shut down cleanly, you can try clearing the group and it will be recreated for you.
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2260048 bytes
Variable Size 675283888 bytes
Database Buffers 385875968 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL> alter database clear logfile group 2;
Database altered.
SQL> alter database open;
Database altered.
SQL>
If not, you will need to recover and open with the RESETLOGS option. Unfortunately, because you lost an entire log group, you may also have lost data.
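The rough sequence, run against the copies you made above, looks something like this (a sketch, not a guaranteed recipe - answer CANCEL when recovery asks for the redo you no longer have, and expect to lose whatever was only in the missing group):
SQL> startup mount;
SQL> recover database until cancel;
SQL> alter database open resetlogs;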

How can I close Oracle DbLinks in JDBC with XA datasources and transactions to avoid ORA-02020 errors?

I have a JDBC-based application which uses XA datasources and transactions which span multiple connections, connected to an Oracle database. The app sometimes needs to make some queries using a join with a table from another (Oracle) server using a shared DbLink. The request works if I don't do it too often, but after 4 or 5 requests in rapid succession I get an error (ORA-02020 - too many links in use).
I did some research, and the suggested remedy is to call "ALTER SESSION CLOSE DATABASE LINK ". If I call this request after the query that joins the DbLink table, I get the error ORA-2080 (link is in use). If I call it before the query, I get ORA-2081 (link closed). Does this call do any good at all?
The JDBC connection is closed long before the transaction commit (which is managed either by the servlet or by the EJB container, depending on the circumstances). I get the impression that when the connection closes, Oracle marks the link as closed, but it takes a minute or two for it to return to the pool of available links. I understand I could enlarge the pool of links (using the open_links property in the config file), but that won't guarantee that I won't have the same problem under a heavier load. Is there something I can do differently to get the dblinks to close more rapidly?
Any distributed SQL, even a SELECT, will open a transaction that must be closed before you can close the database link. You need to either roll back or commit before you call ALTER SESSION CLOSE DATABASE LINK.
But it sounds like you've already got something else handling your transactions. If it's not possible to manually roll back or commit, you should try to increase the number of open links. The OPEN_LINKS parameter is the maximum number of links per session. The number of links you need isn't really dependent on the load; it should be based on the maximum number of distinct remote databases.
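For reference, the manual sequence looks like this (MY_LINK is a placeholder), and note that OPEN_LINKS is not dynamically modifiable, so raising it takes effect only after a restart:
--End the distributed transaction first, then close the link.
commit;
alter session close database link MY_LINK;

--Raise the per-session link limit (takes effect after an instance restart).
alter system set open_links = 10 scope=spfile;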
Edit:
The situation you describe in your comment shouldn't happen. I don't understand enough about your system to know what's really happening with the transactions. Anyway, if you can't figure out exactly what the system is doing maybe you can replace "alter session close database link" with a procedure like this:
create or replace procedure rollback_and_close_db_links authid current_user is
begin
    rollback;
    for links in (select db_link from v$dblink) loop
        execute immediate 'alter session close database link '||links.db_link;
    end loop;
end;
/
You'll probably need this grant:
grant select on v_$dblink to [relevant user];

oracle display for every stored procedure the execution time

I'm working on a stored procedure. Inside this one, there are many calls to other stored procedures; there are a bunch of them.
I was wondering if there is an option to get the execution time of every stored procedure and every function involved (with a start and end time, or something like that).
The idea is that I need to optimise it and I will have to touch every part, and since I'm not sure where the longest execution time is, it's a bit difficult. Also, after a modification I would like to see whether the whole process is shorter or not.
If I call the procedure from Unix, using SQL*Plus, I have no log.
If I call it from TOAD, it's blocked until the end.
Any idea?
I'm not a DBA, so I don't have many rights on the database; I'm just a regular user.
If you are using Oracle 11g you should check out the built-in Hierarchical Profiler. It does pretty much exactly what you're proposing to do. Unfortunately, rights on DBMS_HPROF are not granted to PUBLIC by default, so you'll need to ask your DBA to grant you the EXECUTE privilege. As it's to help you with tuning, I'm sure they'll be only too happy to comply.
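Once you have the EXECUTE privilege, profiling is just a wrapper around your top-level call; something like this (the directory object and procedure name are placeholders), after which the raw trace can be formatted with the plshprof command-line utility or loaded with DBMS_HPROF.ANALYZE:
begin
    --Write the raw profiler data to a directory object you have access to.
    dbms_hprof.start_profiling(location => 'PROFILER_DIR', filename => 'my_proc.trc');
    my_top_level_procedure;  --the procedure you want to profile
    dbms_hprof.stop_profiling;
end;
/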
I have seen a logging procedure that was transaction-independent (PRAGMA AUTONOMOUS_TRANSACTION) and was called from the main procedure. It saved the following in a funtime_log table:
current time (wall clock),
sequential number,
thread (session) id,
and text (e.g. the name of the procedure).
This way you can select all events from one session, ordered by the sequential number, and see where the time differs most. In a production environment you can simply make this procedure do nothing to disable logging.
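A rough sketch of that kind of logger (the table, sequence, and column names are made up):
create table funtime_log
(
    log_time   timestamp,
    seq_no     number,
    session_id number,
    log_text   varchar2(4000)
);

create sequence funtime_log_seq;

create or replace procedure log_event(p_text varchar2) is
    pragma autonomous_transaction;  --commits independently of the caller's transaction
begin
    insert into funtime_log (log_time, seq_no, session_id, log_text)
    values (systimestamp, funtime_log_seq.nextval, sys_context('userenv', 'sid'), p_text);
    commit;
end;
/
Call log_event('start of procedure_x') at the start and end of each procedure, then order by seq_no within one session_id to see where the time goes.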
