Find table name from v$datafile.name column - Oracle

When you look at wait events (e.g. with Toad), you see a file# parameter.
How can I get more useful information, such as the table name?
Is it possible to know even the number of records that are read from that table?
In another forum I found this advice, but it doesn't seem to work:
select segment_name
from dba_extents ext
where ext.file_id = 828
and 10711 between ext.block_id and ext.block_id + ext.blocks - 1
and rownum = 1

Let's talk files, blocks, segments and extents.
A segment is a database object that is stored. It may be a table, index, (sub)partition, cluster or LOB. Mostly you'll be interested in tables and indexes.
A segment is made up of extents. If you think of a segment as a book, an extent is a chapter. A segment (generally) starts with at least one extent. When it needs to store more data and it doesn't have room in the existing extents, it adds another extent to the segment.
An extent lives in a datafile. A datafile can have lots of extents each starting at a different point in the file and having a size. You may have one extent of 15 blocks starting in file 1 at block 10.
A wait event should identify the file and block (and row). If your wait event is for file #1 and block 12, you go off to USER_EXTENTS (or DBA_EXTENTS) and look for the extent in file# 1 where 12 is between the starting block location and the starting block location plus the number of blocks. So block 12 would be between starting block 10 and end block 24 (start plus size minus one).
Once you've identified the extent, you track it back to its parent segment (USER_SEGMENTS / DBA_SEGMENTS) which will give you the table/index name.
A theoretical SQL is as follows:
select username, sid, serial#,
row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
ext.*
from v$session s
join dba_extents ext on ext.file_id = row_wait_file#
and row_wait_block# between ext.block_id and ext.block_id + ext.blocks - 1
where username = 'HR'
and status = 'ACTIVE'
For this one I purposefully blocked a session so that it was waiting on a row lock.
828 is a rather large file id. It isn't impossible, but it is unusual. Do a select from DBA_DATA_FILES and see if you have such a file. If not, and you've only got a few files, look at all the objects that match the "10711 between ext.block_id and ext.block_id + ext.blocks - 1" criteria without the file id. You should be able to find a likely candidate from there.
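For example, a quick check along those lines (a sketch reusing the file id and block number from the question):
select file_id, file_name
from   dba_data_files
where  file_id = 828;

select owner, segment_name, segment_type, file_id
from   dba_extents
where  10711 between block_id and block_id + blocks - 1;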
The exception is if the problem was on a temporary segment. Since these get disposed of at the end of the operation, there's no permanent object recorded. In that case the 'name' of the table/index isn't applicable and you need to tackle any performance issue another way (e.g. look at the SQL and its explain plan and work out whether it is correct in using lots of temp space).

Related

concurrent statistics gathering on Oracle 11g partitioned table

I am developing a DWH on Oracle 11g. We have some big tables (250+ million rows), partitioned by value. Each partition is assigned to a different feeding source, and every partition is independent from the others, so they can be loaded and processed concurrently.
Data distribution is very uneven: we have partitions with millions of rows, and partitions with no more than a hundred rows, but I didn't choose the partitioning scheme, and by the way I can't change it.
Given the data volume, we must ensure that every partition always has up-to-date statistics, because if the subsequent processing doesn't have optimal access to the data, it will last forever.
So for each concurrent ETL thread, we:
Truncate the partition
Load data from the staging area with
INSERT /*+ APPEND */ INTO big_table PARTITION (part1) SELECT * FROM temp_table WHERE partition_column = 'PART1'
(this way we get a direct-path load and we don't lock the whole table)
Gather statistics for the modified partition.
In the first stage of the project, we used the APPROX_GLOBAL_AND_PARTITION strategy and it worked like a charm:
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
partname=>part1,
estimate_percent=>1,
granularity=>'APPROX_GLOBAL_AND_PARTITION',
CASCADE=>dbms_stats.auto_cascade,
degree=>dbms_stats.auto_degree)
But we had the drawback that, when we loaded a small partition, the APPROX_GLOBAL part was dominant (still a lot faster than GLOBAL), so for a small partition we had, e.g., 10 seconds of loading and 20 minutes of statistics.
So we were advised to switch to the INCREMENTAL STATS feature of 11g, which means that you don't specify the partition you have modified, you leave all parameters on auto, and Oracle does its magic, automatically understanding which partition(s) have been touched. And it actually works; we have really sped up the small partitions. After turning on the feature, the call became:
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
estimate_percent=>dbms_stats.auto_sample_size,
granularity=>'AUTO',
CASCADE=>dbms_stats.auto_cascade,
degree=>dbms_stats.auto_degree)
Notice that you don't pass the partition anymore, and you don't specify a sample percentage.
But we're having a drawback, maybe even worse than the previous one, and it is correlated with the high level of parallelism we have.
Let's say we have two big partitions that start at the same time; they will finish the load phase at almost the same time too.
The first thread ends its insert statement, commits, and launches the stats gathering. The stats procedure notices there are two modified partitions (this is correct: one is full and the second is truncated, with a transaction in progress) and correctly updates the stats for both partitions.
Eventually the second thread ends, gathers the stats, sees all partitions already updated, and does nothing (this is NOT correct, because the second thread committed its data in the meantime).
The result is:
PARTITION NAME | LAST ANALYZED        | NUM ROWS | BLOCKS | SAMPLE SIZE
---------------+----------------------+----------+--------+------------
PART1          | 04-MAR-2015 15:40:42 | 805731   | 20314  | 805731
PART2          | 04-MAR-2015 15:41:48 | 0        | 16234  | (null)
The consequence is that I occasionally incur suboptimal plans (which means killing the session, manually refreshing the stats, and manually launching the process again).
I even tried putting an exclusive lock around the gathering, so no more than one thread can gather stats on the same table at once, but nothing changed.
IMHO this is odd behaviour, because the stats procedure, the second time it is invoked, should check the last commit on the second partition and see that it is newer than the last stats gathering time. But that doesn't seem to be happening.
Am I doing something wrong? Is it an Oracle bug? How can I guarantee that all stats are always up to date with the incremental stats feature turned on and a high level of concurrency?
I managed to reach a decent compromise with this procedure.
PROCEDURE gather_tb_partiz(
    p_tblname  IN VARCHAR2,
    p_partname IN VARCHAR2)
IS
  v_stale all_tab_statistics.stale_stats%TYPE;
BEGIN
  -- Check whether the table-level statistics are flagged as stale
  BEGIN
    SELECT stale_stats
      INTO v_stale
      FROM user_tab_statistics
     WHERE table_name = p_tblname
       AND object_type = 'TABLE';
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_stale := 'YES';
  END;
  IF v_stale = 'YES' THEN
    -- Table marked stale: refresh the (approximate) global stats too
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'APPROX_GLOBAL AND PARTITION');
  ELSE
    -- Table not stale: partition-level stats are enough
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'PARTITION');
  END IF;
END gather_tb_partiz;
At the end of each ETL, if the number of added/deleted/modified rows is low enough not to mark the table as stale (10% by default, tunable with the STALE_PERCENT preference), I collect only partition statistics; otherwise I collect global and partition statistics.
This keeps the ETL of small partitions fast, because no global statistics must be regathered, and big partitions safe, because any subsequent query will have fresh statistics and will likely use an optimal plan.
Incremental stats is enabled anyway, so whenever the global statistics have to be recalculated it is pretty fast, because it aggregates partition-level statistics and does not perform a full scan.
I am not sure whether, with incremental enabled, 'APPROX_GLOBAL AND PARTITION' and 'GLOBAL AND PARTITION' differ at all, because both incremental and approx do basically the same thing: aggregate stats and histograms without doing a full scan.
Have you tried to have incremental statistics on, but still explicitly name a partition to analyze?
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
partname=>part,
degree=>dbms_stats.auto_degree);
For your table, stale (yesterday's) global stats are not as harmful as completely invalid partition stats (0 rows). I can propose two slightly different approaches that we use:
Have a separate GLOBAL stats gathering executed by your ETL tool right after all partitions are loaded. If it's taking too long, play with estimate_percent, as dbms_stats.auto_sample_size will likely be more than 1% (a minimal sketch follows below).
Gather the global (as well as all other stale) stats in a separate database job run later during the day, after all data is loaded into the DW.
The key point is that stale statistics which differ only slightly from fresh are almost just as good. If statistics show you 0 rows, they'll kill any query.
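A minimal sketch of the first option (owner, table name and sample percentage are placeholders to adapt):
BEGIN
  dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                                tabname          => 'BIG_TABLE',
                                estimate_percent => 1,
                                granularity      => 'GLOBAL',
                                degree           => dbms_stats.auto_degree);
END;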
Considering what you are trying to achieve, you need to run stats at specific intervals of time for all partitions, not at the end of the process that loads each partition. It could be challenging if this is a live table with constant data loads happening round the clock, but since these are LARGE DW tables I really doubt that's the case. So the best bet would be to collect stats at the end of loading all partitions; this will ensure that statistics are collected for partitions where data has changed or statistics are missing, and the global statistics are updated based on the partition-level statistics and synopses.
However, to do so you need to turn on the incremental feature for the table (11gR1).
EXEC DBMS_STATS.SET_TABLE_PREFS('<Owner>','BIG_TABLE','INCREMENTAL','TRUE');
At the end of every load, gather table statistics using the GATHER_TABLE_STATS call. You don't need to specify the partition name. Also, do not specify the granularity parameter.
EXEC DBMS_STATS.GATHER_TABLE_STATS('<Owner>','BIG_TABLE');
Kindly check if you have used DBMS_STATS to set the table preference to gather incremental statistics. This Oracle blog explains how statistics are maintained after each affected row:
Incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table
BEGIN
DBMS_STATS.SET_TABLE_PREFS(myschema,'BIG_TABLE','INCREMENTAL','TRUE');
END;
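To verify that the preference took effect, you can read it back with DBMS_STATS.GET_PREFS (schema and table names here are placeholders):
SELECT DBMS_STATS.GET_PREFS('INCREMENTAL', 'MYSCHEMA', 'BIG_TABLE') AS incremental
FROM dual;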
I'm a bit rusty about it, so first of all a question:
did you try serializing partition loading? If so, how long does the statistics gathering take and how well does it work? Notice that since loading time is so much smaller than statistics gathering, I guess this could also act as a temporary workaround.
The APPEND hint does affect redo size, meaning the transaction only logs minimal information, so statistics may not account for the new data:
http://oracle-base.com/articles/misc/append-hint.php
Thinking out loud: since the direct-path insert appends rows at the end of the partition and only updates the metadata at the end, the already running thread gathering statistics could have read non-updated (stale) data. Thus it may not be a bug, and locking threads would accomplish nothing.
You could test this behaviour by temporarily switching your table/partition to LOGGING, for instance, and see how it works (slower, of course, but it's a test). Can you do it?
EDIT: incremental stats should work anyway, even disabling parallel statistics gathering, since it relies on the incremental values no matter how they were collected:
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

after alter system flush shared_pool low performance Oracle

We did some refactoring and replaced two similar queries with one parameterized query:
a.isGood = :1
After that, the query using this parameter with the value 'Y' executed longer than usual (it became almost as slow as with 'N'). We ran the alter system flush shared_pool command, and the query with parameter 'Y' completed fast again (as before the refactoring) while the query with parameter 'N' now hangs for a long time.
As you can guess, the number of rows in the database matching 'N' is much larger than the number matching 'Y'.
Oracle 10g
Why did this happen?
I assume that you have an index on that column, otherwise the performance would be the same regardless of the Y/N combination. I have seen this happen quite a bit on 10g+ due to Oracle's optimizer bind peeking combined with histograms on columns with skewed data distribution. The histograms get created automatically when one gathers table statistics using the parameter method_opt with 'FOR ALL COLUMNS SIZE AUTO' (among other values). Oracle optimizes the query for the value in the bind variable provided in the very first execution of that query. If you run the query with Y the first time, Oracle might want to use an index instead of a full table scan, since Y will return a small number of rows. The next time you run the query with N, Oracle will reuse the first execution plan, which happens to be a poor choice for N, since it returns the vast majority of rows.
The execution plans are cached in the SGA. Once you flush it, you get a brand new execution plan the very first time the query runs again.
My suggestion is:
Obtain the explain plan of both original queries (one with a hardcoded Y and one with a hardcoded N). Investigate whether the two plans use different indexes or one has a much higher cost than the other. I have the feeling that one uses a full table scan and the other uses an index. The first one should be faster for N and the second should be faster for Y.
Try removing the statistics on the table and see if it makes a difference to the query that has the bind variable. Later you will need to gather statistics again for the table, or other queries on that table might suffer.
You can also gather statistics for that one table using method_opt => 'FOR ALL COLUMNS SIZE 1'. That will keep the statistics without histograms on any column of that table (see the sketch after this list).
A bitmap index on this column might fix the issue as well. Indexes on a column that has only two possible values (Y and N) are not exactly very efficient.
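For example, a sketch of that gather call (assuming the table is called x, as in the queries below, and is owned by the current user):
BEGIN
  dbms_stats.gather_table_stats(ownname    => user,
                                tabname    => 'X',
                                method_opt => 'FOR ALL COLUMNS SIZE 1',
                                cascade    => TRUE);
END;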
If column isGood has 99,000 'N' values and 1,000 'Y' values and you run with the condition isGood = 'Y', then it may be appropriate to use an index to find the results: you are returning 1% of the rows. If you run the query with the condition isGood = 'N', a full table scan would be more appropriate since you are returning most of the table anyway. If you were to use an index for the N condition, you would be doing an extra index lookup for every data item lookup.
Although the general rule is that bind parameters are good, they can be problematic in cases like this where two different plans are really required for the query. With the bind parameter scenario:
SELECT * FROM x WHERE isGood = :1
The statement will be parsed and a plan computed and saved in the sql cache. The same plan will be used for both query scenarios which is not desirable. But:
SELECT * FROM x WHERE isGood = 'Y'
SELECT * FROM x WHERE isGood = 'N'
will result in two plans being stored in the sql cache, hopefully each with the appropriate plan for the query. Version 11g avoids this problem with adaptive cursor sharing, which can use different plans for different bind variable values.
You need to look at your plans (EXPLAIN PLAN) to see what is happening in your case. Flush the cache, try one method, examine the plan; try the other, examine the plan. That should give you an idea of what is happening in your case; a minimal EXPLAIN PLAN sketch follows the list below. There are a bunch of other topics you might follow up on that may help, for example:
using a hint to force the use of an index
cursor_sharing parameter
histograms on statistics
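A minimal EXPLAIN PLAN sketch for comparing the two hardcoded variants (table and column names taken from the examples above):
EXPLAIN PLAN FOR SELECT * FROM x WHERE isGood = 'Y';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EXPLAIN PLAN FOR SELECT * FROM x WHERE isGood = 'N';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);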

Oracle insert performs too long

I'm confused about the time Oracle 10g XE takes to perform an insert. I implemented a bulk insert from an XML file into several tables with programmatic transaction management. Why does one insert complete in a moment while another takes more than 10 minutes? I couldn't wait any longer and stopped it. I think there's something more complex here that I have not paid attention to yet.
Update:
I found a lock using the Monitor.
Waits
Event enq: TX - row lock contention
name|mode 1415053316
usn<<16 | slot 327711
sequence 162
SQL
INSERT INTO ESKD$SERVICESET (ID, TOUR_ID, CURRENCY_ID) VALUES (9, 9, 1)
What does it mean and how should I resolve it?
TX enqueues are well known and a quick Google search will give you a clear answer.
From that article:
1) Waits for TX in mode 6 occurs when a session is waiting for a row level lock that is already held by another session. This occurs when one user is updating or deleting a row, which another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
If you have lots of simultaneous inserts and updates to a table you want each transaction to be a short as possible. Get in, get out... the longer things sit in between, the longer the delays for OTHER transactions.
PURE GUESS:
I have a feeling that your mention of "programmatic transaction management" means you're trying to use a table like a QUEUE: inserting a start record, updating it frequently to change the status and then deleting the 'finished' ones. That is always trouble.
This question will be really hard to answer with so little specific information. All that I can tell you is why this could be happening.
If you are doing an INSERT ... SELECT ... bulk insert, then perhaps your SELECT query is performing poorly. There may be a large number of table joins, inefficient use of inline views and other resources that may be negatively impacting the performance of your INSERT.
Try executing your SELECT query through Explain Plan to see how the optimizer derives the plan and to evaluate the cost of the query.
The other thing that you mentioned was a possible lock. This could be the case however you will need to analyze this with the OEM tool to tell for sure.
Another thing to consider may be that you do not have indexes on your tables OR the statistics on these tables may be out of date. Out of date statistics can GREATLY impact the performance of queries on large tables.
see sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
The locking wait indicates a conflict that could easily be the cause of your performance issues. On the surface it looks likely that the problem is inserting a duplicate key value while the first insert of that key value had not yet committed. The lock you see, "enq: TX - row lock contention", happens because one session is trying to modify uncommitted data from another session. There are 4 common reasons for this particular lock wait event:
update/delete of the same row
inserting the same unique key
modifying the same bitmap index chunk
deleting/updating a parent value to a foreign key
We can eliminate the first and last case as you are doing an insert.
You should be able to identify the 2nd case if you have no bitmap indexes involved. If you have bitmap indexes involved and you have unique keys involved, then you could investigate easily if you had Active Session History (ASH) data, but unfortunately Oracle XE doesn't have it. On the other hand, you can collect it yourself with S-ASH, see: http://ashmasters.com/ash-simulation/ . With ASH or S-ASH you can run a query like
col event for a22
col block_type for a18
col objn for a18
col otype for a10
col fn for 99
col sid for 9999
col bsid for 9999
col lm for 99
col p3 for 99999
col blockn for 99999
select
to_char(sample_time,'HH:MI') st,
substr(event,0,20) event,
ash.session_id sid,
mod(ash.p1,16) lm,
ash.p2,
ash.p3,
nvl(o.object_name,ash.current_obj#) objn,
substr(o.object_type,0,10) otype,
CURRENT_FILE# fn,
CURRENT_BLOCK# blockn,
ash.SQL_ID,
BLOCKING_SESSION bsid
--,ash.xid
from v$active_session_history ash,
all_objects o
where event like 'enq: TX %'
and o.object_id (+)= ash.CURRENT_OBJ#
Order by sample_time
/
Which would output something like:
ST EVENT SID LM P2 P3 OBJ OTYPE FN BLOCKN SQL_ID BSID
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
showing the object name ("OBJN") and the object type ("OTYPE") involved in the contention, and that the type is an INDEX. From there you could look up the index to verify that it is a bitmap index.
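For instance, a quick way to check that (the index name I1 is taken from the sample output above; add an owner filter as needed):
select owner, index_name, index_type
from   all_indexes
where  index_name = 'I1';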
IF the problem is a bitmap index, then you should probably re-evaluate using bitmap indexes, or revisit the way that data is loaded and/or modify it to reduce conflicts.
If the problem isn't BITMAP indexes, then it's trying to insert a duplicate key. Some other process had inserted the same key value and not yet committed. Then your process tries to insert the same key value and has to wait for the first session to commit or rollback.
For more information see this link: lock waits
It means your sequence cache is too small. Increase it.
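If that is indeed the cause, increasing the cache is a one-liner (the sequence name here is a placeholder):
ALTER SEQUENCE my_seq CACHE 1000;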

Commit In loop gives wrong output?

I am trying to insert the numbers 1 to 10, except 6 and 8, into the table messages1, but when I fetch them from the table messages1, the output comes in this order:
4
5
7
9
10
1
2
3
It should be like this
1
2
3
4
5
7
9
10
According to the logic, it works fine when I omit the commit or put it somewhere else.
Please explain why this is happening.
This is my code:
BEGIN
  FOR i IN 1..10
  LOOP
    IF i <> 6 AND i <> 8
    THEN
      INSERT INTO messages1
      VALUES (i);
    END IF;
    COMMIT;
  END LOOP;
END;
select * from messages1;
If you don't use ORDER BY, you should assume the order the results appear in is undefined. Often the results are in the same order they were inserted in, but it's not guaranteed.
Bottom line, if you want your results in some specific order, use ORDER BY.
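For example (the table has a single column in the snippet above, so ordering by column position works as a sketch):
SELECT * FROM messages1 ORDER BY 1;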
As Matti says, you need the ORDER BY clause explicitly to guarantee the ordering is returned correctly.
When you have pending changes (i.e. uncommitted ones), you are the only one able to see them (generally...), because they haven't been added to the data store where the other data is. Oracle maintains a separate list of pending changes which it uses to alter the results it gets from the main data store. In your example the data coming from this list happens to be returned in order; as there is very little data in the example, Oracle presumably doesn't need to split the pending data in any way to optimise its storage.
Once the data is committed it will go into the main database storage and be ordered in any number of possible ways depending on how the table and partition is set up.
So in short, the data is coming from two different places before and after the commit; it just so happens they are returned in different orderings, but don't rely on them always behaving like that.

How to protect a running column within Oracle/PostgreSQL (kind of MAX-result locking or something)

I need advice on the following situation with Oracle/PostgreSQL:
I have a db table with a "running counter" and would like to protect it in the following situation with two concurrent transactions:
T1: SELECT MAX(C) FROM TABLE WHERE CODE='xx';
T1: -- C for new : result + 1
T2: SELECT MAX(C) FROM TABLE WHERE CODE='xx';
T2: -- C for new : result + 1
T1: INSERT INTO TABLE...
T2: INSERT INTO TABLE...
So, in both cases, the column value for INSERT is calculated from the old result added by one.
From this, some running counter handled by the db would be fine. But that wouldn't work because
the counter values of existing rows are sometimes changed
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned) : with different values for CODE the counters would be independent.
With some other databases this can be handled with the SERIALIZABLE isolation level, but at least with Oracle and PostgreSQL, while the phantom reads are prevented, the table still ends up with two distinct rows with the same counter value. This seems to have to do with predicate locking, locking "all the possible rows covered by the query" - some other DBs end up locking the whole table or something similar.
SELECT ... FOR UPDATE statements seem to be for other purposes and don't even seem to work with the MAX() function.
Setting a UNIQUE constraint on the column would probably be a solution, but are there some other ways to prevent the situation?
b.r. Touko
EDIT: One more option could probably be manual locking even though it doesn't appear nice to me..
Both Oracle and PostgreSQL support what are called sequences, and they are a perfect fit for your problem. You can have a regular int column, but define one sequence per group, and do a single query like
--PostgreSQL
insert into table (id, ... ) values (nextval(sequence_name_for_group_xx), ... )
--Oracle
insert into table (id, ... ) values (sequence_name_for_group_xx.nextval, ... )
Increments in sequences are atomic, so your problem just wouldn't exist. It's only a matter of creating the required sequences, one per group.
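Creating the per-group sequences is straightforward; the same basic syntax works in both Oracle and PostgreSQL (the names are illustrative, matching the inserts above):
CREATE SEQUENCE sequence_name_for_group_xx START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE sequence_name_for_group_yy START WITH 1 INCREMENT BY 1;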
the counter values of existing rows are sometimes changed
You should put a unique constraint on that column if this would be a problem for your app. Doing so would guarantee that a transaction at SERIALIZABLE isolation level would abort if it tried to use the same id as another transaction.
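A sketch of such a constraint (table and constraint names are placeholders; the column names follow the question's example):
ALTER TABLE the_table ADD CONSTRAINT uq_code_counter UNIQUE (code, c);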
One more option could probably be manual locking even though it doesn't appear nice to me..
Manual locking in this case is pretty easy: just take a SHARE UPDATE EXCLUSIVE or stronger lock on the table before selecting the maximum. This will kill concurrent performance, though.
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned) : with different values for CODE the counters would be independent.
This leads me to the Right Solution for this problem: sequences. Set up several sequences, one for each "value group" you want to get IDs in their own range. See Section 9.15 of The Manual for the details of sequences and how to use them; it looks like they're a perfect fit for you. Sequences will never give the same value twice, but might skip values: if a transaction gets the value '2' from a sequence and aborts, the next transaction will get the value '3' rather than '2'.
The sequence answer is common, but might not be right. The viability of this solution depends on what you actually need. If what you semantically want is "some guaranteed to be unique number" then that is what a sequence is for. However, if what you want is to make sure that your value increases by exactly one on each insert (as you have asked), then DO NOT USE A SEQUENCE! I have run into this trap before myself. Sequences are not guaranteed to be sequential! They can skip numbers. Depending on what sort of optimizations you have configured, they can skip LOTS of numbers. Even if you have things configured just right so that you shouldn't skip any numbers, that is not guaranteed, and is not what sequences are for. So, you are only asking for trouble if you (mis)use them like that.
A slightly better solution is to bundle the select into the insert, like so:
INSERT INTO table(code, c, ...)
VALUES ('XX', (SELECT MAX(c) + 1 AS c FROM table WHERE code = 'XX'), ...);
(I haven't test run that query, but I'm pretty sure it should work. My apologies if it doesn't.) But, doing something like that reflects the semantic intent of what you are trying to do. However, this is inefficient, because you have to do a scan for MAX, and the inference I am taking from your sample is that you have a small number of code values relative to the size of the table, so you are going to do an expensive, full table scan on every insert. That isn't good. Also, this doesn't even get you the ACID guarantee you are looking for. The select is not transactionally tied to the insert. You can't "lock" the result of the MAX() function. So, you could still have two transactions running this query and they both do the sub-select and get the same max, both add one, and then both try to insert. It's a much smaller window, but you may still technically have a race condition here.
Ultimately, I would challenge that you probably have the wrong data model if you are trying to increment on insert. You should insert with a unique key, most commonly a sequence value (at least as an easy surrogate key for any natural key). That gets the data safely inserted. Then, if you need a count of things, have one table that stores your counts.
CREATE TABLE code_counts (
code VARCHAR(2), --or whatever
count NUMBER
);
If you really want to store the code count of each item as it is inserted, the separate count table also allows you to do so correctly, transactionally, like so:
UPDATE code_counts SET count = count + 1 WHERE code = 'XX' RETURNING count INTO :count;
INSERT INTO table(code, c, ...) VALUES ('XX', :count, ...);
COMMIT;
The key is that the update locks the counter table and reserves that value for you. Then your insert uses that value. And all of that is committed as one transactional change. You have to do this in a transaction. Having a separate count table avoids the full table scan of doing SELECT MAX(). In essence, this re-implements a sequence, but it also guarantees you sequential, ordered use.
Without knowing your whole problem domain and data model, it is hard to say, but abstracting your counts out to a separate table like this where you don't have to do a select max to get the right value is probably a good idea. Assuming, of course, that a count is what you really care about. If you are just doing logging or something where you want to make sure things are unique, then use a sequence, and a timestamp to sort by.
Note that I'm saying not to sort by a sequence either. Basically, never trust a sequence to be anything other than unique. Because when you get to caching sequence values on a multi-node system, your application might even consume them out of order.
This is why you should use the Serial datatype, which defers the lookup of C to the time of insert (which uses table locks, I presume). You would then not specify C; it would be generated automatically. If you need C for some intermediate calculation, you would need to save first, then read C and finally update with the derived values.
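A minimal PostgreSQL sketch of that idea (table and column names are illustrative); SERIAL makes the database assign C from an implicit sequence at insert time:
CREATE TABLE the_table (
    c    SERIAL,       -- assigned automatically from an implicit sequence
    code VARCHAR(2)
);
INSERT INTO the_table (code) VALUES ('xx');  -- c is filled in by the database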
Edit: Sorry, I didn't read your whole question. What about solving your other problems with normalization? Just create a second table for each specific type (for each x where A='x'), where you have another auto increment. Manually edited sequences could be another column in the same table, which uses the generated sequence as a base (i.e. if pk = 34 you can have another column mypk = '34Changed').
You can create a sequential column by using a sequence as the default value.
First, you have to create the sequence counter:
CREATE SEQUENCE SEQ_TABLE_1 START WITH 1 INCREMENT BY 1;
Then you can use it as the default value:
CREATE TABLE T (
    COD NUMERIC(10) DEFAULT NEXTVAL('SEQ_TABLE_1') NOT NULL,
    column1...
    column2...
);
Now you don't need to worry about the sequence when inserting rows:
INSERT INTO T (column1, column2) VALUES (value1, value2);
Regards.
