We are using Oracle 19c. Recently we altered the PGA_AGGREGATE_LIMIT value to 20GB, but after a week we see that it is 10GB.
Is there a way to check when this was altered?
Regards
Tejas
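(A hedged sketch of one way to check: ALTER SYSTEM parameter changes are written to the instance alert log, which since 11g can also be queried from SQL through the V$DIAG_ALERT_EXT view, assuming you have the privileges to read it.)
-- Look for alert-log messages mentioning the parameter
SELECT originating_timestamp, message_text
FROM v$diag_alert_ext
WHERE lower(message_text) LIKE '%pga_aggregate_limit%'
ORDER BY originating_timestamp;
(Also worth remembering: if the parameter was changed with SCOPE=MEMORY only, an instance restart silently reverts it to the spfile value.)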
When I look at the column comments in our Data Lake (Hadoop; comments made during Parquet table creation with Hive or Impala), they are cut off after ~200 characters.
Might this be a global character setting in our Hadoop system, or some Hive restriction? If not, is there a way to set the maximum string length for comments during table creation? Unfortunately, I have no admin access to the system itself and therefore only restricted insight.
Column comments are stored in the Hive Metastore table COLUMNS_V2, in a column called COMMENT.
Currently, the size of that column is limited to 256 characters (see the MySQL metastore schema definition for Hive version 3.0.0, for example).
In the upcoming 4.0 (?) version it seems to have been expanded to varchar(4000), but the associated Hive JIRA-4921 is still listed as unresolved and doesn't mention a target release.
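If you (or an admin) have read access to the backing metastore database, a quick way to confirm the limit yourself, assuming a MySQL-backed metastore:
-- Check the declared size of the metastore column that holds comments
SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'COLUMNS_V2' AND COLUMN_NAME = 'COMMENT';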
I am using Oracle XE 10.2. I am trying to copy 2,653,347 rows from a remote database with the statement:
INSERT INTO AUTOSCOPIA
(field1,field2...field47)
SELECT * FROM AUTOS@REMOTE;
I am trying to copy all 47 columns for all 2.6 million rows locally. After running for a few minutes, however, I get the error:
ORA-12952: The request exceeds the maximum allowed database size of 4 GB data.
How can I avoid this error?
Details: I have 3 indexes on my local table (the one I want to insert the remote data into).
You're using the Express Edition of Oracle 10.2, which comes with a number of limitations. The one you're running into is that you are limited to 4 GB of space for your tables and your indexes.
How big is the table in GB? If the table has 2.6 million rows and each row is more than ~1575 bytes, then what you want to do isn't possible. You'd have to either limit the amount of data you're copying over (not getting every row, not getting every column, or not getting all the data in some columns would be options) or install a version and edition that lets you store that much data. The Express Edition of 11.2 allows you to store 11 GB of data and is free just like the Express Edition of 10.2, so that would be the easiest option.
You can see how much space the table consumes in the remote database by querying the all_segments view in the remote database; that should approximate the amount of space you'd need locally.
Note that this ignores the space used by out-of-line LOB segments as well as indexes:
SELECT sum(bytes)/1024/1024/1024 size_in_gb
FROM all_segments@remote
WHERE owner = <<owner of table in remote database>>
AND segment_name = 'AUTOS';
If the table is less than 4 GB but the size of the table + indexes is greater than 4 GB, then you could copy the data locally but you would need to drop one or more of the indexes you've created on the local table before copying the data over. That, of course, may lead to performance issues but you would at least be able to get the data into your local system.
If you (or anyone else) has created any tables in this database, those tables count against your 4 GB database limit as well. Dropping them would free up some space that you could use for this table.
Assuming that you will not be modifying the data in this table once you copy it locally, you may want to use a PCTFREE of 0 when defining the table. That will minimize the amount of space reserved in each block for subsequent updates.
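For illustration, a minimal sketch of that suggestion; it recreates the table from the remote structure, so treat it as a template rather than a drop-in replacement for your existing table definition:
-- Create the local table with PCTFREE 0 (no space reserved for updates);
-- WHERE 1 = 0 copies the structure only, so the rows can be loaded afterwards
CREATE TABLE autoscopia PCTFREE 0 AS
SELECT * FROM autos@remote WHERE 1 = 0;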
I want to drop a partition that is empty, but I am aware that Oracle marks indexes unusable whenever you perform a partition DDL statement like DROP; therefore, I should add UPDATE GLOBAL INDEXES to the statement, though it looks unnecessary.
Then I came across a post which says that it won't mark them as unusable, so I decided to test it. The thing is, I tested it in two Oracle versions and they behaved differently!
Having two instances:
DBa(Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production)
DBb(Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production)
In DBa it marked them as unusable, while in DBb, which contained the same data as the other DB (cloned with exp/imp), the drop succeeded without marking them unusable.
Is it possible to explicitly tell Oracle that you want to keep the indexes usable because there is no data in the partition (without rebuilding the indexes)?
So far I have not been able to find out why the indexes were marked unusable in one place but not in the other, but there is something to say in case someone has the same problem.
Always run the drop with UPDATE GLOBAL INDEXES: if the partition is empty it takes no time to perform, and it ensures that the indexes will not be marked unusable. There is therefore no reason to hope that Oracle won't mark them.
Maybe you can try the statement below; it maintains index validity during the drop.
ALTER TABLE t1 DROP PARTITION p5 UPDATE GLOBAL INDEXES;
Yes... use LOCAL indexes when creating indexes on a partitioned table.
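A minimal sketch of that suggestion (index and column names are made up): a LOCAL index is partitioned the same way as the table, so dropping a table partition just drops the matching index partition instead of invalidating a global index.
-- Each partition of T1 gets its own independent index partition
CREATE INDEX t1_col1_idx ON t1 (col1) LOCAL;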
The simple version of this question is: is it possible to export all the data from an Oracle 10g XE database that has reached its 4GB maximum size limit?
Now, here's the background:
My (Windows) Oracle 10g XE database has reached its maximum allowed database size of 4GB. The solution I intended to implement was to upgrade to Oracle 11g XE, which has a larger maximum size limit and better reflects our production environment anyway. Of course, in typical Oracle fashion, there is no upgrade-in-place option (at least none that I could find for XE). So I decided to follow the instructions in the "Importing and Exporting Data between 10.2 XE and 11.2 XE" section of the Oracle 11g XE Installation Guide. After fighting with SQL*Plus for a while, I eventually reached step 3d of the instructions, which tells the user to enter the following (it doesn't specify the command line rather than SQL*Plus, but it means the command line):
expdp system/system_password full=Y EXCLUDE=SCHEMA:\"LIKE \'APEX_%\'\",SCHEMA:\"LIKE \'FLOWS_%\'\" directory=DUMP_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
That command results in the following output:
Export: Release 10.2.0.1.0 - Production on Thursday, 29 September, 2011 10:19:11
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.SYS_EXPORT_FULL_06"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 863
ORA-12952: The request exceeds the maximum allowed database size of 4 GB
I have deleted quite a bit of data from the USERS tablespace, but am unable to resize it because of the physical locations of the data. And no matter what I do, I always get that same output. I have tried running "Compact Storage" from the admin web application with no effect.
So my question is, am I missing something? Or is Oracle really so incompetent as to leave people completely out of luck if their XE databases fill up?
You can get to the point where you're able to export the data; it sounds like you just need some help coalescing it so you can reduce the USERS tablespace size and give the SYSTEM tablespace room to grow past your issue.
You mentioned that you removed data from the USERS tablespace but can't resize it. Since you can't shrink a tablespace below its highest allocated block, reorganize your table data by executing the following command for each table:
ALTER TABLE <table_name> MOVE <tablespace_name>;
The tablespace name can be the same tablespace that the table currently lives in; the MOVE will still reorganize and coalesce the data.
This statement will generate that command for all the tables that live in the USERS tablespace:
select 'ALTER TABLE '||OWNER||'.'||TABLE_NAME||' MOVE '||TABLESPACE_NAME||';' From dba_tables where tablespace_name='USERS';
Indexes will also have to be rebuilt (ALTER INDEX <index_name> REBUILD;), as the MOVE command invalidates them: it changes the physical organization of the table data (blocks) instead of relocating it row by row.
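In the same spirit as the generator above, this produces the rebuild statements for whatever the moves left unusable:
select 'ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD;' from dba_indexes where status='UNUSABLE';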
After the data is coalesced, you can resize the USERS tablespace to reflect the actual data size.
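The resize itself would look something like this (the file path and target size here are placeholders; query DBA_DATA_FILES for the real path):
ALTER DATABASE DATAFILE '/path/to/users.dbf' RESIZE 2G;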
Is it a pain? Yes. Is Oracle user-friendly? They would love you to think so, but it's really not, especially when you hit some weird corner case that keeps you from doing the things you want to do.
As you can see, you need some free space in the SYSTEM tablespace in order to export, and Oracle XE refuses to allocate it because the sum of SYSTEM + USERS has reached 4 GB.
I would try installing an Oracle 10gR2 Standard Edition instance on a similar architecture, then shutting down Oracle XE and making a copy of your existing USERS data file. Using ALTER TABLESPACE commands on the Standard Edition instance, you should be able to link the USERS tablespace to your existing data file, then export the data.
Until recently I thought the limit on the number of columns in an Oracle DB was 255, but it turns out the limit is 1000. Can someone confirm this?
Also, I was trying to find out whether there is any similar limit on the number of columns in Derby DB, particularly embedded Derby (Java DB).
Here's a link to the Oracle documentation: Logical Database Limits.
Excerpt:
Per table: 1000 columns maximum
Per index (or clustered index): 32 columns maximum
Per bitmapped index: 30 columns maximum
Here's a link to the Derby documentation: A Derby Database.
Excerpt:
Columns per table: 1,012
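If you want to see the Oracle limit fire for yourself, here is a small sketch (the table name is made up) that builds a 1001-column CREATE TABLE and should fail with ORA-01792: maximum number of columns in a table or view is 1000:
-- Build a CREATE TABLE with 1001 NUMBER columns and execute it;
-- the EXECUTE IMMEDIATE is expected to raise ORA-01792
DECLARE
  stmt VARCHAR2(32767) := 'CREATE TABLE too_wide (c0 NUMBER';
BEGIN
  FOR i IN 1 .. 1000 LOOP
    stmt := stmt || ', c' || i || ' NUMBER';
  END LOOP;
  stmt := stmt || ')';
  EXECUTE IMMEDIATE stmt;
END;
/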