I've looked at forums, worked through tutorials and looked in the Rails guides but can't seem to find what I'm looking for.
I'm trying to create a security application for a company I work for and I am creating the report form. I would like the reports to be numbered with an ascending number but am not sure what to search for or do to accomplish this.
I am using PostgreSQL as my database for both dev and prod.
You could use the id column if it is set up as the primary key (as it normally is). It would be unique to every report, never repeat, and increment by one with every new report.
Or you can make a column just for the report # and use:
CREATE TABLE tablename (
colname SERIAL
);
This is equivalent to doing:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
colname integer NOT NULL DEFAULT nextval('tablename_colname_seq')
);
ALTER SEQUENCE tablename_colname_seq OWNED BY tablename.colname;
This will allow you to set the starting number via:
SELECT setval('tablename_colname_seq', 42, false);
The false means the first number handed out will be 42, not 43. If left out or set to true it will return 43 as the first value in the sequence.
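For example, a minimal sketch against the sequence the SERIAL column above creates (PostgreSQL names it tablename_colname_seq):
SELECT setval('tablename_colname_seq', 42, false); -- next nextval() returns 42
SELECT setval('tablename_colname_seq', 42);        -- third argument defaults to true: next nextval() returns 43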
An important aspect of sequences in Postgres is that a new number is handed out every time nextval is called, even if the row that would have used it is rolled back by a failed transaction. So you could end up with missing numbers in the sequence.
Be sure to read this: http://www.postgresql.org/docs/9.3/static/functions-sequence.html
If this is a problem you might just want to do all of this in Rails by doing something like:
next_number = (Report.select(:id).order('id DESC').limit(1).pluck(:id).first || 0) + 1
and have a unique constraint on the column to ensure no duplicate numbers.
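For instance, a hedged sketch of such a constraint (assuming a reports table with a report_number column; the names are only illustrative):
ALTER TABLE reports
  ADD CONSTRAINT reports_report_number_key UNIQUE (report_number);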
For other DB's see this: http://www.w3schools.com/sql/sql_autoincrement.asp
Is there any way to enable counting of rows modified by a trigger in SQLite?
I know it is disabled (https://www.sqlite.org/c3ref/changes.html) and I understand why, but can I enable it somehow?
CREATE TABLE Users_data (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Deleted BOOLEAN DEFAULT (0),
Name STRING
);
CREATE VIEW Users AS
SELECT Id, Name
FROM Users_data
WHERE Deleted = 0;
CREATE TRIGGER UsersDelete2UsersData
INSTEAD OF DELETE
ON Users
FOR EACH ROW
BEGIN
UPDATE Users_data SET Deleted = 1 WHERE Id = OLD.Id;
END;
-- etc for insert & update
Then delete from Users where Name like 'foo' /* doesn't even need 'Id = 1' */; works fine, but the number of modified rows is, as the documentation says, always zero.
(I can't modify my DAL to automatically add "where Deleted = 0", so the backup plan is to have a Users_deleted table and an 'on delete' trigger on the Users table without any view, but then I have to keep tracking FKs (for example, what to do when someone deletes from an FK table) and so on...)
Edit: The returned number is used for database concurrency checking.
Edit 2: To be more clear: as I said, I cannot modify my DAL (Entity Framework 6), so the preferred answer should work like the following pseudo-code: int affectedRows = query("delete from Users where Name like 'foo';").Execute();
It's all about SQLite's "trigger on view" behavior.
Use sqlite3_total_changes() instead:
This function returns the total number of rows inserted, modified or deleted by all INSERT, UPDATE or DELETE statements completed since the database connection was opened, including those executed as part of trigger programs.
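If your DAL does let you issue plain SQL, the same counter is also exposed as SQLite's total_changes() SQL function, so a rough sketch of the idea (table and filter taken from the question) would be:
SELECT total_changes();                   -- remember this value
DELETE FROM Users WHERE Name LIKE 'foo';  -- fires the INSTEAD OF trigger, updates Users_data
SELECT total_changes();                   -- affected rows = this value minus the remembered one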
It's impossible in sqlite3 (as of 2015).
Basically I was looking for an INSTEAD OF trigger on a view (as in the question) with a return function, which is not supported in sqlite.
By the way, postgresql (and I believe some other full DB servers) can do it.
I am using Slick 2.1.0. Oracle doesn't have a notion of an auto-increment attribute for a column, so how can I manage an insert via Slick using a sequence?
E.g. I have a table and sequence as follows:
CREATE TABLE USER
( "USER_ID" NUMBER NOT NULL ENABLE,
"NAME" VARCHAR2(100) NOT NULL ENABLE,
"ADDRESS" VARCHAR2(1000) NOT NULL ENABLE
);
CREATE SEQUENCE USER_ID_SEQ MINVALUE 1 MAXVALUE 99999999999999 INCREMENT BY 2;
How can I use this sequence to set my USER_ID? Also, setting autoIncLastAsOption = true in Slick's SourceCodeGenerator doesn't seem to help. My IDs are still not an Option[].
Here are some of the options suggested by Typesafe Developer:
If you don’t mind letting Slick manage the DDL, you can use O.AutoInc with OracleDriver. It will automatically create a backing sequence for the generated identity values. The way this works is by installing a trigger that automatically populates the ID from the sequence. Here’s the code that Slick generates for an AutoInc column on Oracle:
create sequence $seq start with 1 increment by 1;
create or replace trigger $trg before insert on $tab
referencing new as new for each row
when (new.$col is null)
begin
  select $seq.nextval into :new.$col from sys.dual;
end;
where $seq, $trg, $col and $tab are the names of the sequence, trigger, identity column and table.
There is no special code being run during an actual insert operation. So if you already have a database schema with an identity sequence, you can manually create a trigger as shown above and mark the column as O.AutoInc in Slick to get the standard handling for auto-incrementing columns.
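Applied to the USER table and USER_ID_SEQ from the question, the manually created trigger could look roughly like this (a sketch, not the exact DDL Slick emits; the trigger name is made up, and USER is quoted because it is an Oracle reserved word):
CREATE OR REPLACE TRIGGER USER_ID_TRG
  BEFORE INSERT ON "USER"
  REFERENCING NEW AS NEW
  FOR EACH ROW
  WHEN (NEW.USER_ID IS NULL)
BEGIN
  SELECT USER_ID_SEQ.NEXTVAL INTO :NEW.USER_ID FROM SYS.DUAL;
END;
/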
If you want a solution without a trigger, you could use insertExpr for inserting in Slick. This allows computed expressions, like using Slick's own sequence API (which is supported by OracleDriver), but unlike a normal insert you do not get all the features, convenience and performance (e.g. batch inserts and pre-compiled inserts).
The downside is that this can't be precompiled (but compiling a simple expression of a few scalar values should be relatively cheap) and you can't just insert a mapped case class that way without some extra mapping boilerplate.
Another option would be to first get a new id (or even multiple ids for a batch insert) from the sequence with one query, put them into the data transfer objects, and then insert those normally with the ids in place. This requires one extra query per batch (for first fetching the ids) but you can easily use mapped objects and precompile everything.
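A sketch of that last approach, using the table and sequence from the question (the bind variable name and values are only illustrative):
-- step 1: reserve the id(s) up front
SELECT USER_ID_SEQ.NEXTVAL FROM DUAL;
-- step 2: insert normally with the id already in place
INSERT INTO "USER" (USER_ID, NAME, ADDRESS) VALUES (:reserved_id, 'Some name', 'Some address');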
I have an issue that is unknown to me, and I don't know the logic/cause behind it. When I try to insert a record in a table I get a DB2 error saying:
[SQL0803] Duplicate key value specified: A unique index or unique constraint *N in *N
exists over one or more columns of table TABLEXXX in SCHEMAYYY. The operation cannot
be performed because one or more values would have produced a duplicate key in
the unique index or constraint.
Which is a quite clear message to me. But actually there would be no duplicate key if I inserted my new record, judging by the records that are already in the table. When I do a SELECT COUNT(*) FROM SCHEMAYYY.TABLEXXX and then try to insert the record, it works flawlessly.
How can it be that when performing the SELECT COUNT(*) I can suddenly insert the records? Is there some sort of index associated with it which might give issues because it is out of sync? I didn't design the data model, so I don't have deep knowledge of the system yet.
The original DB2 SQL is:
-- Generate SQL
-- Version: V6R1M0 080215
-- Generated on: 19/12/12 10:28:39
-- Relational Database: S656C89D
-- Standards Option: DB2 for i
CREATE TABLE TZVDB.PRODUCTCOSTS (
ID INTEGER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 INCREMENT BY 1
MINVALUE 1 MAXVALUE 2147483647
NO CYCLE NO ORDER
CACHE 20 )
,
PRODUCT_ID INTEGER DEFAULT NULL ,
STARTPRICE DECIMAL(7, 2) DEFAULT NULL ,
FROMDATE TIMESTAMP DEFAULT NULL ,
TILLDATE TIMESTAMP DEFAULT NULL ,
CONSTRAINT TZVDB.PRODUCTCOSTS_PK PRIMARY KEY( ID ) ) ;
ALTER TABLE TZVDB.PRODUCTCOSTS
ADD CONSTRAINT TZVDB.PRODCSTS_PRDCT_FK
FOREIGN KEY( PRODUCT_ID )
REFERENCES TZVDB.PRODUCT ( ID )
ON DELETE RESTRICT
ON UPDATE NO ACTION;
I'd like to see the statements... but since this question is a year old... I won't hold my breath.
I'm thinking the problem may be the
GENERATED BY DEFAULT
And instead of passing NULL for the identity column, you're accidentally passing zero or some other duplicate value the first time around.
Either always pass NULL, pass a non-duplicate value, or switch to GENERATED ALWAYS.
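For example, a sketch against the table from the question that lets DB2 generate the key (the values are only illustrative):
-- omit the identity column entirely
INSERT INTO TZVDB.PRODUCTCOSTS (PRODUCT_ID, STARTPRICE, FROMDATE, TILLDATE)
  VALUES (1, 9.99, CURRENT_TIMESTAMP, NULL);
-- or name it and pass DEFAULT
INSERT INTO TZVDB.PRODUCTCOSTS (ID, PRODUCT_ID, STARTPRICE, FROMDATE, TILLDATE)
  VALUES (DEFAULT, 1, 9.99, CURRENT_TIMESTAMP, NULL);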
Look at preceding messages in the joblog for specifics as to what caused this. I don't understand how the INSERT can suddenly work after the COUNT(*). Please let us know what you find.
Since it shows *N (i.e. n/a) as the name of the index or constraint, this suggests to me that it is not a standard DB2 object, and therefore may be a "logical file" (LF) defined with DDS rather than SQL, with a key structure different from what you were doing your COUNT(*) on.
Your shop may have better tools to view keys on dependent files, but the method below will work anywhere.
If your table might not be the actual "physical file", check this using Display File Description, DSPFD TZVDB.PRODUCTCOSTS, in a 5250 ("green screen") session.
Use the Display Database Relations command, DSPDBR TZVDB.PRODUCTCOSTS, to find what files are defined over your table. You can then DSPFD on each of these files to see the definition of the index key. Also check there that each of these indexes is maintained *IMMED, rather than *REBUILD or *DELAY. (A wild longshot guess as to a remotely possible cause of your strange anomaly.)
You will find the DB2 for i message finder here in the IBM i 7.1 Information Center or other releases
Is it a paging issue? We seem to get -0803 on inserts occasionally when a row is being held for update and it locks a page that probably contains the index that is needed for the insert. This is only a guess, but it appears to me that is what is happening.
I know it is an old topic, but this is what Google shown me on the first place.
I had the same issue yesterday, causing me a lot of headache. I did the same as above, checked the table definitions, keys, existing items...
Then I found out the problem was with my INSERT statement. It was trying to insert two identical records at once, but as the constraint prevented the commit, I could not find anything in the database.
Advice: review your INSERT statement carefully! :)
I'd need advice on the following situation with Oracle/PostgreSQL:
I have a db table with a "running counter" and would like to protect it in the following situation with two concurrent transactions:
T1: SELECT MAX(C) FROM TABLE WHERE CODE='xx'
T1: -- C for new row: result + 1
T2: SELECT MAX(C) FROM TABLE WHERE CODE='xx'
T2: -- C for new row: result + 1
T1: INSERT INTO TABLE...
T2: INSERT INTO TABLE...
So, in both cases, the column value for INSERT is calculated from the old result added by one.
Given this, a running counter handled by the db would be fine. But that wouldn't work, because:
the counter values of existing rows are sometimes changed
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned): with different values for CODE the counters would be independent.
With some other databases this can be handled with the SERIALIZABLE isolation level, but at least with Oracle and PostgreSQL phantom reads are prevented, yet the table still ends up with two distinct rows with the same counter value. This seems to have to do with predicate locking, i.e. locking "all the possible rows covered by the query"; some other DBs end up locking the whole table or something similar.
SELECT ... FOR UPDATE statements seem to be meant for other purposes and don't even seem to work with the MAX() function.
Setting a UNIQUE constraint on the column would probably be the solution, but are there some other ways to prevent the situation?
b.r. Touko
EDIT: One more option could probably be manual locking even though it doesn't appear nice to me..
Both Oracle and PostgreSQL support what's called sequences and the perfect fit for your problem. You can have a regular int column, but define one sequence per group, and do a single query like
--PostgreSQL
insert into table (id, ... ) values (nextval('sequence_name_for_group_xx'), ... )
--Oracle
insert into table (id, ... ) values (sequence_name_for_group_xx.nextval, ... )
Increments in sequences are atomic, so your problem just wouldn't exist. It's only a matter of creating the required sequences, one per group.
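A minimal sketch of that setup in PostgreSQL, one sequence per value group (the sequence names are only illustrative):
CREATE SEQUENCE sequence_name_for_group_xx;
CREATE SEQUENCE sequence_name_for_group_yy;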
the counter values of existing rows are sometimes changed
You should put a unique constraint on that column if this would be a problem for your app. Doing so would guarantee that a transaction at SERIALIZABLE isolation level would abort if it tried to use the same id as another transaction.
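A sketch of such a constraint, assuming the counter column is c and the group column is code as in the question (the table name is illustrative):
ALTER TABLE my_table
  ADD CONSTRAINT my_table_code_c_key UNIQUE (code, c);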
One more option could probably be manual locking even though it doesn't appear nice to me..
Manual locking in this case is pretty easy: just take a SHARE UPDATE EXCLUSIVE or stronger lock on the table before selecting the maximum. This will kill concurrent performance, though.
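A rough PostgreSQL sketch (table and column names are illustrative); SHARE UPDATE EXCLUSIVE conflicts with itself, so concurrent writers queue on the LOCK statement:
BEGIN;
LOCK TABLE my_table IN SHARE UPDATE EXCLUSIVE MODE;
INSERT INTO my_table (code, c)
  SELECT 'xx', COALESCE(MAX(c), 0) + 1 FROM my_table WHERE code = 'xx';
COMMIT;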
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned) : with different values for CODE the counters would be independent.
This leads me to the Right Solution for this problem: sequences. Set up several sequences, one for each "value group" you want to get IDs in their own range. See Section 9.15 of The Manual for the details of sequences and how to use them; it looks like they're a perfect fit for you. Sequences will never give the same value twice, but might skip values: if a transaction gets the value '2' from a sequence and aborts, the next transaction will get the value '3' rather than '2'.
The sequence answer is common, but might not be right. The viability of this solution depends on what you actually need. If what you semantically want is "some guaranteed to be unique number" then that is what a sequence is for. However, if what you want is to make sure that your value increases by exactly one on each insert (as you have asked), then DO NOT USE A SEQUENCE! I have run into this trap before myself. Sequences are not guaranteed to be sequential! They can skip numbers. Depending on what sort of optimizations you have configured, they can skip LOTS of numbers. Even if you have things configured just right so that you shouldn't skip any numbers, that is not guaranteed, and is not what sequences are for. So, you are only asking for trouble if you (mis)use them like that.
One step better solution is to bundle the select into the insert, like so:
INSERT INTO table(code, c, ...)
VALUES ('XX', (SELECT MAX(c) + 1 AS c FROM table WHERE code = 'XX'), ...);
(I haven't test run that query, but I'm pretty sure it should work. My apologies if it doesn't.) But, doing something like that reflects the semantic intent of what you are trying to do. However, this is inefficient, because you have to do a scan for MAX, and the inference I am taking from your sample is that you have a small number of code values relative to the size of the table, so you are going to do an expensive, full table scan on every insert. That isn't good. Also, this doesn't even get you the ACID guarantee you are looking for. The select is not transactionally tied to the insert. You can't "lock" the result of the MAX() function. So, you could still have two transactions running this query and they both do the sub-select and get the same max, both add one, and then both try to insert. It's a much smaller window, but you may still technically have a race condition here.
Ultimately, I would challenge that you probably have the wrong data model if you are trying to increment on insert. You should insert with a unique key, most commonly a sequence value (at least as an easy, surrogate key for any natural key). That gets the data safely inserted. Then, if you need a count of things, then have one table that stores your counts.
CREATE TABLE code_counts (
code VARCHAR(2), --or whatever
count NUMBER
);
If you really want to store the code count of each item as it is inserted, the separate count table also allows you to do so correctly, transactionally, like so:
UPDATE code_counts SET count = count + 1 WHERE code = 'XX' RETURNING count INTO :count;
INSERT INTO table(code, c, ...) VALUES ('XX', :count, ...);
COMMIT;
The key is that the update locks the counter table and reserves that value for you. Then your insert uses that value. And all of that is committed as one transactional change. You have to do this in a transaction. Having a separate count table avoids the full table scan of doing SELECT MAX(). In essence, what this does is re-implement a sequence, but it also guarantees you sequential, ordered use.
Without knowing your whole problem domain and data model, it is hard to say, but abstracting your counts out to a separate table like this where you don't have to do a select max to get the right value is probably a good idea. Assuming, of course, that a count is what you really care about. If you are just doing logging or something where you want to make sure things are unique, then use a sequence, and a timestamp to sort by.
Note that I'm saying not to sort by a sequence either. Basically, never trust a sequence to be anything other than unique. Because when you get to caching sequence values on a multi-node system, your application might even consume them out of order.
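A sketch of that shape in PostgreSQL flavour (names are illustrative): a sequence-backed surrogate key for uniqueness, and a timestamp for ordering.
CREATE TABLE events (
  id         BIGSERIAL PRIMARY KEY,               -- unique, but not guaranteed gap-free or ordered
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),  -- use this for sorting
  payload    TEXT
);
SELECT * FROM events ORDER BY created_at;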
This is why you should use the Serial datatype, which defers the lookup of C to the time of insert (which uses table locks, I presume). You would then not specify C; it would be generated automatically. If you need C for some intermediate calculation, you would need to save first, then read C and finally update with the derived values.
Edit: Sorry, I didn't read your whole question. What about solving your other problems with normalization? Just create a second table for each specific type (for each x where A='x'), where you have another auto-increment. Manually edited sequences could be another column in the same table, which uses the generated sequence as a base (i.e. if pk = 34 you can have another column mypk='34Changed').
You can create a sequential column by using a sequence as the default value:
First, you have to create the sequence counter:
CREATE SEQUENCE SEQ_TABLE_1 START WITH 1 INCREMENT BY 1;
So, you can use it as default value:
CREATE TABLE T (
  COD NUMERIC(10) DEFAULT NEXTVAL('SEQ_TABLE_1') NOT NULL,
  column1 ...,
  column2 ...
);
Now you don't need to worry about sequence on inserting rows:
INSERT INTO T (column1, column2) VALUES (value1, value2);
Regards.
I have a number of tables that use a trigger/sequence combination to simulate auto_increment on their primary keys, which has worked great for some time.
In order to speed up regression testing against software that uses the db, I create control files using some sample data, and added running these to the build process.
This change is causing most of the tests to crash, though, as the testing process installs the schema from scratch and the sequences are returning values that already exist in the tables. Is there any way to programmatically say "update sequences to the max value in the column", or do I need to write out a whole script by hand that updates all these sequences? Or can I/should I change the trigger that substitutes the null value for the sequence to somehow check this (though I think this might cause the mutating-table problem)?
You can generate a script to create the sequences with the start values you need (based on their existing values)....
SELECT 'CREATE SEQUENCE '||sequence_name||' START WITH '||last_number||';'
FROM ALL_SEQUENCES
WHERE SEQUENCE_OWNER = 'YOUR_SCHEMA';
(If I understand the question correctly)
Here's a simple way to update a sequence value - in this case setting the sequence to 1000 if it is currently 50:
alter sequence MYSEQUENCE increment by 950 nocache;
select MYSEQUENCE.nextval from dual;
alter sequence MYSEQUENCE increment by 1;
Kudos to the creators of PL/SQL Developer for including this technique in their tool.
As part of your schema rebuild, why not drop and recreate the sequence?
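For example, a sketch assuming the reloaded data leaves a maximum key of 1000 (table, column and start value are only illustrative; MYSEQUENCE is the sequence from the example above):
SELECT MAX(id) + 1 FROM mytable;   -- say this returns 1001
DROP SEQUENCE MYSEQUENCE;
CREATE SEQUENCE MYSEQUENCE START WITH 1001;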