Environment:
Oracle 12.2 64-bit EE under Linux.
SGA_TARGET and STREAMS_POOL_SIZE are both equal to 0.
SGA_MAX_SIZE = 180G.
If I don't trust the amount of memory automatically allocated by Oracle's internal system processes, what value can I confidently assign manually to the STREAMS_POOL_SIZE parameter without risking database performance?
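For reference, the amount currently carved out for the streams pool by the automatic tuning can be checked in v$sgastat; a quick sketch:
-- current streams pool allocation as reported by the SGA statistics view
SELECT pool, ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   v$sgastat
WHERE  pool = 'streams pool'
GROUP  BY pool;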
We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU, 32 GB RAM instance. The database is not big (35 GB); the PGA Aggregate Limit is 8 GB and the Target is 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection Job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes substantially high PGA memory (approx. 7 GB), and sometimes this makes the database unstable; AWS then loses communication with the RDS instance and restarts the database.
We thought this might be linked to the existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS have fixed it in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection Job was running. There are a couple more databases on the same version where the same pattern was observed and the database was restarted by AWS. We have disabled the job on those databases to avoid further occurrences of this issue; however, we want to run this job, as disabling it may leave stale statistics in the database.
Any pointers on how to tackle this issue?
I found the same issue in my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it on your non production environments, and if you can, try to consult with Oracle Support. Dealing with hidden parameters might lead to unexpected side effects, so apply it at your own risk.
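To confirm whether the fix control actually changed, a query against v$system_fix_control along these lines should show the current value (0 means the fix is disabled):
-- check the current setting of fix control 20424684
SELECT bugno, value, description
FROM   v$system_fix_control
WHERE  bugno = 20424684;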
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for most of the statistics gathering, you can analyze those tables manually or change their preferences.
First, use the SQL below to see which objects are causing the most statistics gathering. According to the test case in bug 30846782, the problem seems to be related only to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that generate a lot of PGA memory with the below query. (However, if the database restarts, it's possible that AWR won't save the recorded values.)
select username, event, sql_id, pga_allocated/1024/1024/1024 pga_allocated_gb, gv$active_session_history.*
from gv$active_session_history
join dba_users on gv$active_session_history.user_id = dba_users.user_id
where pga_allocated/1024/1024/1024 >= 1
order by sample_time desc;
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
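To check whether many objects really do cross the 10% change threshold, something like the following sketch against dba_tab_modifications can help (you may need to flush the in-memory monitoring data first):
begin
  -- flush DML monitoring data so the view below is current
  dbms_stats.flush_database_monitoring_info;
end;
/
-- tables with a large amount of tracked DML since their last stats gathering
SELECT table_owner, table_name, inserts, updates, deletes, truncated
FROM   dba_tab_modifications
ORDER  BY inserts + updates + deletes DESC;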
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
dbms_stats.set_table_prefs
(
ownname => user,
tabname => 'PGA_STATS_TEST',
pname => 'STALE_PERCENT',
pvalue => '20'
);
end;
/
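To verify the preference took effect, dbms_stats.get_prefs can be queried (PGA_STATS_TEST is just the example table used above):
-- read back the STALE_PERCENT preference for the example table
SELECT dbms_stats.get_prefs('STALE_PERCENT', user, 'PGA_STATS_TEST') AS stale_percent
FROM   dual;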
We have an application that uses ODBC to interact with an Oracle database. It has been in use for many years starting with Oracle 9. We are now moving from Oracle 11g to Oracle 19c.
On Windows 10, using an ODBC connection from the Oracle 19c client, we see fewer records retrieved for a query than when we use the same query with a connection from the Oracle 11g client.
On Windows 7, using either connection works and brings back the expected number of records.
I also tried using the same ODBC connection for Oracle 19c in Excel on Windows 10 and with the same query retrieved the correct number of records.
From our application log:
2020-Nov-03 09:54:27.579 DEBUG - CODBCDynamic::CreatePreparedStatement: 'SELECT ****'. [query truncated for posting]
2020-Nov-03 09:54:27.588 DEBUG - CODBCDynamic::BindArgument: index = 1, value = ****.
2020-Nov-03 09:54:27.679 DEBUG - CODBCDynamic::ExecuteQuery: 0 records returned.
There should have been one record returned.
Any clues or ideas would be greatly appreciated.
Some new info:
We've created simplified versions: one 32 bit app (like the original) and one 64 bit. The 64 bit version works OK bringing back all the records. The 32 bit one doesn't bring back all the records (like the original).
Also, I forgot to note above that we are using the 32 bit client.
I want to load about 2 million rows from a CSV-formatted file into a database, run some SQL statements for analysis, and then remove the data. The file is 2 GB in size; the data is web server log messages.
I did some research and found that the H2 in-memory database seems to be faster, since it keeps the data in memory. When I tried to load the data I got an OutOfMemory error because of 32-bit Java. I am planning to try with 64-bit Java.
I am looking for any optimization options to load the data quickly and run the SQL.
test.sql
CREATE TABLE temptable (
  f1 varchar(250) DEFAULT '' NOT NULL,
  f2 varchar(250) DEFAULT '' NOT NULL,
  -- the original type for f3 (the response time) was garbled; a varchar is assumed here
  f3 varchar(250) DEFAULT '' NOT NULL
) as select * from CSVREAD('log.csv');
Running it like this with 64-bit Java:
java -Xms256m -Xmx4096m -cp h2*.jar org.h2.tools.RunScript -url 'jdbc:h2:mem:test;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0' -script test.sql
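The analysis and cleanup would simply be more statements appended to test.sql; a minimal sketch of that step (the aggregation and column names are only placeholders matching the table definition above):
-- example analysis over the loaded log data (placeholder columns)
SELECT f1, COUNT(*) AS requests
FROM   temptable
GROUP  BY f1
ORDER  BY requests DESC;
-- remove the data once the analysis is done
DROP TABLE temptable;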
If any other database is available to use on AIX, please let me know.
Thanks
If the CSV file is 2 GB, then it will need more than 4 GB of heap memory when using a pure in-memory database. The exact memory requirements depend a lot on how redundant the data is. If the same values appear over and over again, then the database will need less memory as common objects are re-used (no matter if it's a string, long, timestamp,...).
Please note the LOCK_MODE=0, UNDO_LOG=0, and LOG=0 are not needed when using create table as select. In addition, the CACHE_SIZE does not help when using the mem: prefix (but it helps for in-memory file systems).
I suggest trying the in-memory file system first (memFS: instead of mem:), which is slightly slower than mem: but usually needs less memory:
jdbc:h2:memFS:test;CACHE_SIZE=65536
If this is not enough, try the compressed in-memory mode (memLZF:), which is again slower but uses even less memory:
jdbc:h2:memLZF:test;CACHE_SIZE=65536
If this is still not enough, I suggest trying the regular persistent mode and seeing how fast that is:
jdbc:h2:~/data/test;CACHE_SIZE=65536
I have to change the character set from AL32UTF8 to WE8MSWIN1252 in an Oracle 11g R2 Express instance. I tried to use the command:
ALTER DATABASE CHARACTER SET WE8MSWIN1252;
But it fails, saying that WE8MSWIN1252 isn't a superset of AL32UTF8. Then I found some articles about CSSCAN, but that tool doesn't seem to be available in Oracle 11g Express.
http://www.oracle-base.com/articles/10g/CharacterSetMigration.php
Does anyone have an idea of how to do that? Thanks in advance.
Edit
Clarifying a little bit: the real issue is that I'm trying to import data into a table that has a column defined as VARCHAR2(6 BYTE). The string causing the issue is 'eq.mês'; it needs 6 bytes in WE8MSWIN1252 and 7 bytes in UTF-8.
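The difference is easy to see by comparing character and byte lengths; a quick sketch (the CHAR-semantics table at the end is only an illustration with a hypothetical name, not necessarily the right fix):
-- 6 characters, but 7 bytes in AL32UTF8 because 'ê' takes two bytes
SELECT LENGTH('eq.mês') AS char_count, LENGTHB('eq.mês') AS byte_count FROM dual;
-- a column declared with character semantics would hold the value regardless of encoding
CREATE TABLE t_char_semantics (col1 VARCHAR2(6 CHAR));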
You can't.
The Express Edition of 11g is only available with a UTF-8 character set. If you want to go back to the Express Edition of 10g, there was a Western European version that used the Windows-1252 character set. Unlike the other editions, Oracle doesn't support the full range of character sets in the Express Edition, nor does it support changing the character set of an existing XE database.
Why do you believe you need to change the database character set? Other than potentially taking a bit more storage space to support the characters in the upper half of the Windows-1252 range, which generally aren't particularly heavily used, there aren't many downsides to a UTF-8 database.
When you want to move to a character set that supports only a subset of the original characters, I would say your best option is to use exp and imp back (or expdp and impdp).
Are you sure that no table contains any character not found in the 1252 code page?
The problem with only executing that ALTER DATABASE command is that the Data Dictionary is not converted, and it can become corrupted.
I had the same problem. In my case, we are using Oracle 11g Express Edition (11.2.0.2.0) and we really need it to run with the WE8MSWIN1252 character set, but I cannot change the character set at installation time (it always installs with AL32UTF8).
With an Oracle Client 11g installed as Administrator, we ran csscan full=y (check this link: https://oracle-base.com/articles/10g/character-set-migration) and noticed that there are lossy and convertible data problems in our database. However, the problems are with the MDSYS (Oracle Spatial) and APEX_040000 (Oracle Application Express) schemas. Since we don't need these products, we removed them (check this link: http://fast-dba.blogspot.com.br/2014/04/how-to-remove-unwanted-components-from.html).
Then we exported the user schemas with expdp and dropped the users (they must be recreated at the end of the process).
Executing csscan again with full=y capture=y, it reported: "The data dictionary can be safely migrated using the CSALTER script." If the report does not say this, the csalter.plb script will not work, because some conditions will not be satisfied:
changeless for all CHAR, VARCHAR2, and LONG data (Data Dictionary and Application Data)
changeless for all Application Data CLOB
changeless and/or convertible for all Data Dictionary CLOB
In our case, these conditions were satisfied and we were able to run the CSALTER script successfully. Moreover, this script executes the ALTER DATABASE command you are trying to run, and it converts the Data Dictionary CLOB data that is convertible.
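For reference, the CSALTER run itself is just a SQL*Plus session in restricted mode, roughly along these lines (a rough outline only; follow the oracle-base article above and your version's documentation for the exact procedure):
-- rough outline, run as SYSDBA; verify against the CSALTER documentation for your release
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
@?/rdbms/admin/csalter.plb
SHUTDOWN IMMEDIATE;
STARTUP;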
Finally, we created the users and the tablespaces of our application and imported the dump of the user data successfully.
Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32 bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:
SELECT e.SID, e.username, e.status, b.PGA_MEMORY
FROM v$session e
LEFT JOIN
(select y.SID, y.value pga,
TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
from v$sesstat y, v$statname z
where y.STATISTIC# = z.STATISTIC# and NAME = 'session pga memory') b
ON e.sid=b.sid
WHERE (PGA)/1024/1024 > 20
ORDER BY 4 DESC;
It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while the table is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?
Update:
I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.
Perhaps SQL Developer doesn't close the cursors it has opened.
So if you run a query which sorts a million rows and SQL Developer fetches only first 20 rows from there, it needs to keep the cursor open should you want to scroll down and fetch more.
So, it needs to keep some of the PGA memory associated with the cursor's sort area still allocated (it's called retained sort area) as long as the cursor is open and hasn't reached EOF (end-of-fetch).
Pick a session and run:
select sql_id, operation_type, actual_mem_used, max_mem_used, tempseg_size
from v$sql_workarea_active
where sid = &SID_OF_INTEREST;
This should show whether some cursors are still kept open with their memory...
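You can also list which cursors the session still holds open; a quick sketch against v$open_cursor:
-- cursors currently open in the session of interest
select sid, user_name, sql_id, sql_text
from v$open_cursor
where sid = &SID_OF_INTEREST;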
Are you using Automatic Memory Management? If yes, I would not worry about the PGA memory used.
See docs:
Automatic Memory Management: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/memory003.htm#ADMIN11011
MEMORY_TARGET: http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams133.htm
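If you are not sure whether AMM is enabled, a quick check of the relevant parameters (a sketch):
-- MEMORY_TARGET > 0 means Automatic Memory Management is in use
SELECT name, value
FROM   v$parameter
WHERE  name IN ('memory_target', 'memory_max_target', 'sga_target', 'pga_aggregate_target');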
Is there a reason you are using 32 bit Oracle? Most recent hardware supports 64 bit.
Oracle, especially with AMM, will use every bit of memory on the machine you give it. If it doesn't have a reason to de-allocate memory it will not do so. It is the same with storage space: if you delete 20 GB of user data that space is not returned to the OS. Oracle will hold on to it unless you explicitly compact the tablespaces.
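For example, reclaiming deleted space typically requires explicit steps such as shrinking the segment and then resizing the datafile (the table and file names below are purely hypothetical):
-- shrink a segment and hand the freed space back to the tablespace (hypothetical table)
ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE;
-- then, optionally, shrink the datafile itself to return space to the OS (hypothetical path/size)
ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' RESIZE 10G;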
I believe a simple test should relieve your concerns. If it's 32 bit, and each SQL Developer session is using 100MB+ of RAM, then you'd only need a few hundred sessions open to cause a low-memory problem...if there really is one.