The query that gives me this error has been running for 6 months now and it was working fine. Today, for some reason, it gave me this error:
Error in running query because of SQL Error, Code=1652, Message=ORA-01652: unable to extend temp segment by 16 in tablespace PSTEMP (50,380).
I don't want to extend the "PSTEMP" file. The query shouldn't be the problem; as I mentioned, it worked fine until now.
I don't know if this will help, but the query has a prompt value. If I enter a wrong value it works fine, but when I enter the value from last week, which I know should return 16 rows, I instead get the above error.
You can check your temp space with
SELECT * FROM dba_temp_free_space;
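To see which sessions are actually holding temp space, here's a sketch using V$TEMPSEG_USAGE (called V$SORT_USAGE on very old versions):
select s.sid, s.username, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1048576) mb   -- temp usage per session, in MB
from v$tempseg_usage u
join v$session s on s.saddr = u.session_addr
join dba_tablespaces t on t.tablespace_name = u.tablespace
order by mb desc;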
but it might not necessarily be temp despite the error message.
Check your tablespace free space with:
select a.tablespace_name,
       sum(a.tots / 1048576) Tot_Size,               -- total size, MB
       sum(a.sumb / 1048576) Tot_Free,               -- free space, MB
       round(sum(a.sumb) * 100 / sum(a.tots), 2) Pct_Free,
       sum(a.largest / 1048576) Max_Free,            -- largest free extent, MB
       sum(a.chunks) Chunks_Free
from (
  select tablespace_name, 0 tots, sum(bytes) sumb,
         max(bytes) largest, count(*) chunks
  from dba_free_space
  group by tablespace_name
  union
  select tablespace_name, sum(bytes) tots, 0, 0, 0
  from dba_data_files
  group by tablespace_name
) a
group by a.tablespace_name
order by pct_free;
Most likely, your SQL became too heavy as the underlying data grew. You can try optimizing the SQL or, if that's not an option, ask the DBAs to increase the temporary tablespace (PSTEMP).
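If extending it ever does become unavoidable, the DBA-side fix is usually a one-liner; a sketch, with a placeholder file path:
alter tablespace PSTEMP add tempfile '/u01/oradata/pstemp02.dbf' size 2g autoextend on;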
I created a table in Oracle like:
CREATE TABLE suppliers AS (SELECT * FROM companies WHERE id > 1000);
I would like to know the complete select statement which was used to create this table.
I have already tried get_ddl but it is not giving the select statement. Can you please let me know how to get the select statement?
If you're lucky one of these statements will show the DDL used to generate the table:
select *
from gv$sql
where lower(sql_fulltext) like '%create table suppliers%';
select *
from dba_hist_sqltext
where lower(sql_text) like '%create table%';
I used the word lucky because GV$SQL will usually only have results for a few hours or days, until the data is purged from the shared pool. DBA_HIST_SQLTEXT will only help if you have AWR enabled, the statement was run in the last X days that AWR is configured to hold data (the default is 8), the statement was run after the last snapshot collection (by default it happens every hour), and the statement ran long enough for AWR to think it's worth saving.
Also, Oracle does not always store the full SQL; for security reasons, DDL statements are often truncated. Don't be surprised if the text suddenly cuts off after the first N characters.
And depending on how the SQL was called, the case and whitespace may differ. Use lower() and lots of wildcards to increase the chance of finding the statement.
Try this:
select distinct table_name
from all_tab_columns
where column_name in (
  select column_name
  from all_tab_columns
  where table_name = 'SUPPLIERS'
);
This way you can find the table your table was created from, by looking for tables that share its column names.
I have some issues with how sqlplus output is formatted for the terminal, and I am thinking of writing a wrapper script around sqlplus to fix them.
On the other hand, that seems really lame, because Oracle has tons of tools written, yet it seems difficult to get what I want. Does anyone have another suggestion?
First, I want smarter column widths. If I create a table with a column whose max size is 200 characters but then I put "abc", "xyz" and "123" in it, do I need a 200-space wide column on the terminal to display the contents? I do not think so. I think I need 3 characters plus a couple for padding. Yet Oracle insists on giving me a 200-character wide column. Unless there is somewhere to fix this.
Second, I want easy access to a sideways display of the columns, like using \G at the end of the command in MySQL. I know there is a way to do this in Oracle but it seemed complicated. Why could there not just be a simple switch? Like a \G at the end of the command? There can be if I wrap the output to sqlplus and do this myself.
So, the question seems to be this: should I write a simple wrapper script around sqlplus to give me what I want, or is there a way to get this behavior from sqlplus itself? And if there is, how much extra information will I have to stuff into my head to make it work? It does not seem as though it should be very complicated, but Oracle is certainly not making it easy.
First of all, I suggest you look over the SQL*Plus reference; you might find some useful tips there, like adjusting a column width:
COL column_name for a20
You can put your own settings in the glogin.sql file; over time, like with any other CLI, you'll get your preferences just right.
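For example, a minimal glogin.sql fragment (these particular settings are just my own preferences, not anything official):
-- sample glogin.sql: per-session SQL*Plus defaults
set pagesize 50
set linesize 200
set tab off
col object_name format a30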
To describe a table you can use DESC. If you want more data, write your own script and reuse it with @.
If all this doesn't work for you, you can always switch to a GUI like Toad or SQL developer.
EDIT:
I'm adding one of my own scripts to show you some tricks on how to make SQL*Plus more friendly on the command line. This one is for getting segment sizes.
/* This is the trick: it defines &1 and &2 as empty strings when no parameters are passed */
set ver off feed off
col 1 new_v 1
col 2 new_v 2
select 1,2 from dual where 1=0;
variable p_owner varchar2(30)
variable p_segment varchar2(30)
/* set bind variables */
begin
:p_owner := '&1';
:p_segment := '&2';
end;
/
set feed 1
break on segment_type skip 1
column MB for a25
select
segment_type,
decode(gi_segment_name + gi_segment_type + gi_tablespace_name , 3 ,'...Grand Total', segment_name) SEGMENT_NAME,
to_char(round(MB,3),'99,999,999.99') MB ,
nvl(tablespace_name,'-*-') tablespace_name
from (
select tablespace_name , segment_type , segment_name , sum(bytes/1024/1024) MB ,
grouping_id(segment_name) gi_segment_name ,
grouping_id(segment_type) gi_segment_type ,
grouping_id(tablespace_name) gi_tablespace_name
from dba_segments
where ((:p_owner is null and owner = user) or owner like upper(:p_owner))
and (:p_segment is null or segment_name like upper('%'||:p_segment||'%'))
group by rollup(tablespace_name, segment_type , segment_name)
)
where not (gi_segment_name = 1 and gi_segment_type = 0 and gi_tablespace_name = 0)
order by decode(segment_type,'TABLE','0','TABLE PARTITION','1','INDEX','2','INDEX PARTITION','3',segment_type) ,
(case when segment_name like '.%' then 'z' else 'a' end) ,
gi_segment_name ,
MB desc ,
segment_name;
clear break
/* Clear the definitions of &1 and &2 after use,
   so the variables can be null on the next run. */
undefine 1
undefine 2
I'll walk you through some of the things I've done here:
The script accepts two parameters. The first four lines clear the parameters if none are received; if you don't do this, SQL*Plus will prompt you for them, and we don't want that.
Setting the bind variables was more of a big deal in past versions; it's intended to save hard/soft parses. The latest versions solve this problem, but it's still a best practice.
The break is a nice touch. You'll see it.
The grouping_id shows me the subtotals on several levels.
I've added two parameters, owner and segment name. Both can contain wildcards, and both can be null. If none are provided, the query fetches the current user's segments.
The ORDER BY DECODE lets me set a custom sort order for the different segment types. You can change it as you wish.
I execute the script like this:
My segments:
@seg
Scott's segments:
@seg scott
Scott's EMP-related segments:
@seg scott emp
I have similar scripts for sessions, longops, wait events, tables, constraints, locks, killing sessions, etc. During my daily routine I rarely write SQL to query this stuff any more.
I made a terrible mistake in an SQL index creation:
create index IDX_DATA_TABLE_CUSECO on DATA_TABLE (CUSTOMER_ID, SESSION_ID, CONTACT_ID)
tablespace IDX_TABLESPACE LOCAL ;
As you can see, I missed the keyword ONLINE, which would have created the index without blocking a production table that is under heavy use and has 600M+ records. The corrected SQL is:
create index IDX_DATA_TABLE_CUSECO on DATA_TABLE (CUSTOMER_ID, SESSION_ID, CONTACT_ID)
tablespace IDX_TABLESPACE LOCAL ONLINE;
I did it from PL/SQL Developer. When I tried to stop it, the program stopped responding and crashed.
The production system has not been working for 9 hours now and my boss wants to explode. :D
Is there any way to see how many seconds/minutes/hours Oracle 11g has left to process this index creation? Or is there any way to see whether Oracle is still working on this request? (PL/SQL Developer crashed.)
For haters:
I know I should have done it as mentioned here: (source)
CREATE INDEX cust_idx on customer(id) UNUSABLE LOCAL;
ALTER INDEX cust_idx REBUILD parallel 6 NOLOGGING ONLINE;
You should be able to view the progress of the operation in V$SESSION_LONGOPS
SELECT sid,
serial#,
target,
target_desc,
sofar,
totalwork,
start_time,
time_remaining,
elapsed_seconds
FROM v$session_longops
WHERE time_remaining > 0;
Of course, in a production system, I probably would have killed the session hours ago rather than letting the DDL operation continue to prevent users from accessing the application.
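For completeness, the kill itself looks like this once you have the SID and SERIAL# from V$SESSION (the values here are placeholders):
alter system kill session '123,45678' immediate;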
I have a partitioned table like so:
create table demo (
ID NUMBER(22) not null,
TS TIMESTAMP not null,
KEY VARCHAR2(5) not null,
...lots more columns...
)
The partition is on the TS column (one partition per year).
Since I search a lot via the timestamp, I created a combined index:
create index demo.x1 on demo (ts, key);
The query looks like this:
select *
from demo t
where t.TS = to_timestamp('2009-06-30 07:47:57', 'YYYY-MM-DD HH24:MI:SS')
I also tried to add and t.KEY = '00101' but that doesn't help.
But EXPLAIN PLAN shows a TABLE ACCESS with option FULL:
# Operation Options Object Mode Cost Bytes Cardinality
0 SELECT STATEMENT ALL_ROWS 583804 287145 2127
1 PARTITION RANGE ALL 583804 287145 2127
2 TABLE ACCESS FULL HEADER ANALYZED 583804 287145 2127
No mention of the index. What could be wrong?
[EDIT] For some reason, Oracle completely miscalculated the cost of the operation. I have 112 million rows in that table. The cost for a full scan of a single partition should be 20 million, not 600,000. That's why it even ignores optimizer hints.
[EDIT2] During my tests, I ran into this puzzling result. When I run this select:
select tx_ts
from kt.header
where tx_ts = to_timestamp('2009-06-30 07:47:57', 'YYYY-MM-DD HH24:MI:SS')
I get this EXPLAIN PLAN:
0 SELECT STATEMENT ALL_ROWS 152 15616 1952
1 PARTITION RANGE ALL 152 15616 1952
2 INDEX FAST FULL SCAN HEADERX2 ANALYZED 152 15616 1952
So when I restrict myself to the indexed column as the result of the select, Oracle decides to use the index. When I want to get all columns, I have to wait for a full table scan. What's going on here?
[EDIT3] Found it; see my answer below.
Okay, it was a mistake on my part: The column had the type DATE, not TIMESTAMP. Since I used to_timestamp(), Oracle saw no way to use the index.
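To spell it out: comparing a DATE column to a TIMESTAMP value makes Oracle implicitly convert the column for every row, which rules out the index. A sketch of the corrected query, using to_date() instead:
select *
from demo t
where t.TS = to_date('2009-06-30 07:47:57', 'YYYY-MM-DD HH24:MI:SS');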
I'm not really an expert on partitioning, but I think what has happened here is that you have created a global index -- a single index that covers rows in all of the partitions. Therefore, the optimizer has to choose between two mutually exclusive access paths: (A) an index range scan, or (B) partition pruning. I believe the PARTITION RANGE operation indicates that it has chosen B.
Updating statistics, as others have suggested, may change the behavior. When you drop and recreate the index, you discard any statistics that existed for the index.
Creating the index as UNIQUE, if the timestamp and key uniquely identify a row, would be a good idea and might change the behavior as well.
However, I think the real "fix" is that you should instead create local indexes -- a separate index on each partition. This should enable the optimizer to do partition pruning followed by an index lookup. Honestly, I'm not sure what the exact syntax is to do this. Maybe you just create the index on each partition explicitly using the individual partition names.
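If I remember right, it should just be the LOCAL keyword on the CREATE INDEX (after dropping the existing global index); a sketch, untested here:
create index demo.x1 on demo (ts, key) LOCAL;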
If everything else fails, you might try an optimizer hint:
select /*+ index(t x1) */ *
from demo t
where t.TS = to_timestamp('2009-06-30 07:47:57', 'YYYY-MM-DD HH24:MI:SS')
Are your stats up to date? Stale stats may mean that Oracle believes a full table scan is faster than using the index. Are you using any hints in your query that might be telling Oracle to do a full scan?
Can you supply us with the full query and explain plan results?
Edit: Aaron, you can update the stats using the dbms_stats.gather_schema_stats or dbms_stats.gather_table_stats procedures. See here for more information on them. These will update all the relevant stats for the schema or table specified. Oracle's cost-based optimiser uses the statistics to determine which execution plan to choose; it never uses the actual table sizes. You'll need to re-gather your stats when the size of your table changes significantly (+/- 10% or so).
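For example, a sketch (assuming the table lives in your own schema; adjust ownname otherwise):
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'DEMO');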
Another thing: when you use a compound index, you need to specify all the columns used in the index in your query for the optimizer to consider the index (and I think you need to specify them in the same order as well, though I could be wrong about that; it's been a while since I looked at this stuff).
There may just be a typo in your transcription of the "CREATE INDEX..." statement that you posted, but are you sure you actually have created the index?
To give us some first-pass idea of the statistics, use these queries:
select table_name, num_rows
from user_tables
where table_name = 'DEMO';
select table_name, num_rows
from user_tab_partitions
where table_name = 'DEMO';
select index_name, num_rows
from user_indexes
where table_name in
  (select table_name
   from user_tables
   where table_name = 'DEMO');
Also, exactly how are you generating the EXPLAIN PLAN? Do you have access to the database host to retrieve a trace file if you enable tracing?
[edit]
As I commented, it would be good to see the trace of an actual execution of the query. Since you've indicated you have access to the db host filesystems, run a SQL script that (in the same session) issues the following:
alter session set sql_trace=true;
select /* THIS IS THE TRACE */
*
from demo t
where t.TS = to_timestamp('2009-06-30 07:47:57', 'YYYY-MM-DD HH24:MI:SS');
exit
After the script exits, find out where the trace file directory is with this query:
select value from v$parameter where name = 'user_dump_dest';
Use whatever searching tool is available to you to find the file that includes the string "THIS IS THE TRACE".
Process/profile the trace file by issuing the OS command: tkprof traceFileName.trc tkprof.out
Examine this file - you'll see some overhead information, but there will be a section that details the actual execution plan and statistics for the query. If you see the same results in this information then the next step is to add another statement (after the first "alter session") that will dump information on why the Oracle CBO is ignoring the index:
ALTER SESSION SET EVENTS='10053 trace name context forever, level 1';
I just had an Oracle 9 database crash and it left me with a couple of .trc files. Some of them specified indexes that were out of kilter, and I dropped and re-added those indexes.
However, when I run:
ANALYZE TABLE TABLESPACE.TABLE VALIDATE STRUCTURE CASCADE;
I still get an error: ORA-00900, SQLSTATE: 42000.
This creates a .trc file with:
Table/Index row count mismatch
table 1172 : index 1250, 0
Index root = tsn: 9 rdba: 0x0240390b
What do I do with this information?
I found this link, however I'm not sure how to use it:
http://www.freelists.org/post/oracle-l/Table-index-mismatch-trace-file,1
The error says your indexes (perhaps not the ones you thought) are still bad.
From your link: if you run the query below through SQL*Plus, it will ask for an rdba number. Enter the value from your error message, 0x0240390b (no quotes). This will return a file number and a block number.
SELECT dbms_utility.data_block_address_file(
to_number(trim(leading '0' from
replace('&&rdba','0x','')),'XXXXXXXX')
) AS rfile#,
dbms_utility.data_block_address_block(
to_number(trim(leading '0' from
replace('&&rdba','0x','')),'XXXXXXXX')
) AS block#
FROM dual;
Next run the following query:
select owner, segment_name, segment_type
from dba_segments
where header_file = <rfile#>
  and header_block = <block#>;
This will give you the offending index to be dropped and recreated.
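The drop/recreate itself is just the standard pair of statements; a sketch with placeholder names, substituting the owner and segment_name the query returns and the index's real definition:
drop index SCOTT.BAD_IDX;
create index SCOTT.BAD_IDX on SCOTT.SOME_TABLE (SOME_COLUMN);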
To be honest, with an error like this I would recommend opening an SR with Oracle - you want to make sure you don't lose your data!