Sequel adding a "returning null" to my inserts. How do I disable it? - ruby

I'm using Ruby Sequel (the ORM gem) to connect to a Postgres database. I'm not using any models. My insert statements seem to have a "returning null" appended to them automatically (and thus won't return the newly inserted row id/pk). What's the use of this? Why is it the default? And more importantly, how do I disable it (connection-wide)?
Also, I noticed there's a dataset.returning method but it doesn't seem to work!
require 'sequel'
db = Sequel.connect 'postgres://user:secret@localhost/foo'
tbl = "public__bar".to_sym #dynamically generated by the app
dat = {x: 1, y: 2}
id = db[tbl].insert(dat) #generated sql -- INSERT INTO "public"."bar" ("x", "y") VALUES (1, 2) RETURNING NULL
Don't know if it matters, but the table in question is inherited (using Postgres table inheritance).
ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
sequel (3.44.0)
--Edit 1 -- After a bit of troubleshooting--
Looks like the table inheritance COULD BE the problem here. Sequel automatically runs a query to determine the pk of a table (in my case the pk is defined on a table up the chain); when it doesn't find one, perhaps the "returning null" is appended?
SELECT pg_attribute.attname AS pk
FROM pg_class, pg_attribute, pg_index, pg_namespace
WHERE pg_class.oid = pg_attribute.attrelid
  AND pg_class.relnamespace = pg_namespace.oid
  AND pg_class.oid = pg_index.indrelid
  AND pg_index.indkey[0] = pg_attribute.attnum
  AND pg_index.indisprimary = 't'
  AND pg_class.relname = 'bar'
  AND pg_namespace.nspname = 'public'
--Edit 2--
Yup, looks like that's the problem!

If you are using PostgreSQL inheritance, please note that the following are not inherited:
Primary Keys
Unique Constraints
Foreign Keys
In general you must declare these on each child table. For example:
CREATE TABLE my_parent (
id bigserial primary key,
my_value text not null unique
);
CREATE TABLE my_child() INHERITS (my_parent);
INSERT INTO my_child(id, my_value) values (1, 'test');
INSERT INTO my_child(id, my_value) values (1, 'test'); -- works, no error thrown
What you want instead is to do this:
CREATE TABLE my_parent (
id bigserial primary key,
my_value text not null unique
);
CREATE TABLE my_child(
primary key(id),
unique(my_value)
) INHERITS (my_parent);
INSERT INTO my_child(id, my_value) values (1, 'test');
INSERT INTO my_child(id, my_value) values (1, 'test'); -- unique constraint violation thrown
This sounds to me like you have some urgent DDL issues to fix.
You could retrofit the second example's constraints onto the first with:
ALTER TABLE my_child ADD PRIMARY KEY(id);
ALTER TABLE my_child ADD UNIQUE (my_value);
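As a sanity check, you can re-run Sequel's pk-discovery query from the question against the child table (a sketch, adjusted for the my_child example); once the child's primary key is declared it returns a row, and Sequel stops appending RETURNING NULL:
-- After ALTER TABLE my_child ADD PRIMARY KEY(id), the pk lookup succeeds.
SELECT pg_attribute.attname AS pk
FROM pg_class, pg_attribute, pg_index, pg_namespace
WHERE pg_class.oid = pg_attribute.attrelid
  AND pg_class.relnamespace = pg_namespace.oid
  AND pg_class.oid = pg_index.indrelid
  AND pg_index.indkey[0] = pg_attribute.attnum
  AND pg_index.indisprimary = 't'
  AND pg_class.relname = 'my_child'
  AND pg_namespace.nspname = 'public';
-- expected: pk = 'id'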

Related

Procedure to remove duplicates in a table

Brief model overview:
I have student and course tables. As it's a many-to-many relation, there is also a junction table, student_course (id_student, id_course), with a composite unique constraint on both columns.
The problem I want to solve:
Owing to a mistake, there is no unique constraint on the code column of the course table. There should be, as the code column should uniquely identify a course. As a result there are two rows in the course table with the same value in the code column. I want to remove that duplicate, check that there are no other duplicates, and add a unique constraint on the code column, without losing relations with the student table.
My approach to solve the issue:
I have created a procedure that should do what I want.
CREATE OR REPLACE PROCEDURE REMOVE_COURSES
(
  v_course_code IN VARCHAR2,
  v_course_price IN VARCHAR2
)
AS
  new_course_id NUMBER;
BEGIN
  INSERT INTO course (CODE, PRICE) VALUES (v_course_code, v_course_price)
  RETURNING ID INTO new_course_id;
  FOR c_course_to_overwrite IN (SELECT *
                                FROM course
                                WHERE code = v_course_code AND id != new_course_id) LOOP
    UPDATE student_course SET id_course = new_course_id WHERE id_course = c_course_to_overwrite.id;
    DELETE FROM course WHERE id = c_course_to_overwrite.id;
  END LOOP;
END REMOVE_COURSES;
/
Main problem I want to solve:
The procedure keeps giving me an error about a unique constraint violation on the student_course table. But I am really not sure how that's possible: I am using new_course_id, so there should be no chance of the junction table ending up with two rows with the same (id_student, id_course). What do I need to fix?
Miscellaneous:
I want to solve this issue using a procedure purely for learning purposes.
EDITED:
CREATE TABLE student (
id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
name VARCHAR2(150) NOT NULL,
PRIMARY KEY (id)
);
ALTER TABLE student MODIFY ID
GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH LIMIT VALUE);
CREATE TABLE course (
id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
code VARCHAR2(255) NOT NULL,
price NUMBER,
PRIMARY KEY (id)
);
ALTER TABLE course MODIFY ID
GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH LIMIT VALUE);
CREATE TABLE student_course (
id_student NUMBER NOT NULL,
id_course NUMBER NOT NULL,
PRIMARY KEY (id_student, id_course),
CONSTRAINT student_fk FOREIGN KEY (id_student) REFERENCES student (id),
CONSTRAINT course_fk FOREIGN KEY (id_course) REFERENCES course (id)
);
insert into student (name) values ('John');
INSERT INTO course (ID, CODE) VALUES (1, 'C_13');
INSERT INTO course (ID, CODE) VALUES (2, 'C_13');
commit;
INSERT INTO STUDENT_COURSE (ID_STUDENT, ID_COURSE) VALUES (1, 1);
INSERT INTO STUDENT_COURSE (ID_STUDENT, ID_COURSE) VALUES (1, 2);
commit;
CALL REMOVE_COURSES('C_13', '100');
[23000][1] ORA-00001: unique constraint (SYS_C0014983) violated ORA-06512: near "REMOVE_COURSES", line 8
Rather than removing one of the duplicate codes, you're creating a third course with the same code, and trying to move all students on either of the old courses onto the new one. The error suggests you have students who are already enrolled on both of the old courses.
Your cursor loop query is:
SELECT *
FROM course
WHERE code = v_course_code AND id != new_course_id
That finds both old courses with that code, and the update then points all of their junction records at the same new ID.
If there are any students listed against both old IDs for the code - which would be allowed by your composite unique key - then they will both be updated to the same new ID.
So say the courses you're looking at are [updated for your example code]:
ID CODE
-- ----
1 C_13
2 C_13
and you have junction records for a student for both courses, like:
ID_STUDENT ID_COURSE
---------- ---------
1 1
1 2
You are creating a new course:
ID CODE
-- ----
3 C_13
Your cursor loop looks for code = 'C_13' and ID != 3, which finds IDs 1 and 2. So in the first iteration of the loop you update the rows with ID 1, and now you have:
ID_STUDENT ID_COURSE
---------- ---------
1 3
1 2
Then in the second iteration you try to update the rows with ID 2, which would attempt to produce:
ID_STUDENT ID_COURSE
---------- ---------
1 3
1 3
which would break the unique constraint - hence the error.
You probably don't want to create a new course at all, but either way, you need to remove duplicate records from student_course - that is, rows which will become duplicates when updated. Basically you need to find students with entries for both existing course IDs, and delete either of them. If you don't care which, this would do it:
delete from student_course sc1
where id_course in (
  select id
  from course
  where code = 'C_13'
)
and exists (
  select null
  from student_course sc2
  join course c on c.id = sc2.id_course
  where sc2.id_student = sc1.id_student
  and sc2.id_course > sc1.id_course
  and c.code = 'C_13'
);
but there are other (probably better) ways.
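As a quick sanity check after that delete (a sketch using the example data), the following should return no rows once each student holds at most one of the duplicated courses:
-- Students still enrolled on more than one course with the duplicated code.
SELECT sc.id_student, COUNT(*) AS enrolments
FROM student_course sc
JOIN course c ON c.id = sc.id_course
WHERE c.code = 'C_13'
GROUP BY sc.id_student
HAVING COUNT(*) > 1;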
You then have the choice of updating all remaining junction records for both old IDs to your new ID, or consolidating on one of the old IDs and removing the other (a sketch of the latter follows).
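For instance, consolidating on course 1 might look like this (only a sketch using the example IDs):
-- Move the remaining enrolments from the duplicate course onto the survivor,
-- then drop the now-unreferenced duplicate row.
UPDATE student_course SET id_course = 1 WHERE id_course = 2;
DELETE FROM course WHERE id = 2;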
(Your question implies you want to solve the overall task yourself, so I'll refrain from trying to provide a complete solution - this just hopefully helps you understand and resolve your main problem...)

Filtering child table in Db Adapter on Oracle Service Bus

I have a self-related table containing both active and historical data (a status field holding 'A'(ctive) or 'H'(istorical)).
I need to create a service returning active records with all their active children.
I can add a condition to the main query but cannot affect the "many" part of the one-to-many relation: historical records are also retrieved. Is it possible to implement this without creating a pipeline that loops through a service based on a table with no relation? In pure EclipseLink this can be achieved with a DescriptorCustomizer, but I don't know whether that is a valid solution for OSB.
Also, I cannot create a database view containing only active records.
BTW I'm on 12.2.1.1
Example table structure and data (for Oracle):
create table SELF_REL_TAB
(
ID number not null,
PARENT_ID number,
STATUS varchar2(1)
);
comment on column SELF_REL_TAB.ID
is 'Primary key';
comment on column SELF_REL_TAB.PARENT_ID
is 'Self reference';
comment on column SELF_REL_TAB.STATUS
is 'Status A(ctive) H(istorical)';
alter table SELF_REL_TAB
add constraint SRT_PK primary key (ID);
alter table SELF_REL_TAB
add constraint SRT_SRT_FK foreign key (PARENT_ID)
references SELF_REL_TAB (ID);
alter table SELF_REL_TAB
add constraint srt_status_chk
check (STATUS IN ('A','H'));
INSERT INTO SELF_REL_TAB VALUES (1, NULL, 'A');
INSERT INTO SELF_REL_TAB VALUES (2, 1, 'A');
INSERT INTO SELF_REL_TAB VALUES (3, 1, 'H');
Maybe you already solved it, but you can use the CONNECT BY clause to do that.
select
lpad(' ', 2*level) || id
from
self_rel_tab
where status = 'A'
start with
parent_id is null
connect by
prior id=parent_id
JP
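One caveat to the above: in Oracle the WHERE clause is applied after the CONNECT BY hierarchy is built, so an active child of a historical parent would still be returned. To prune whole inactive branches, the status condition can go into the START WITH and CONNECT BY clauses instead - a sketch:
-- Only walk edges leading to active rows, so descendants of an 'H' row never appear.
select lpad(' ', 2*level) || id
from self_rel_tab
start with parent_id is null and status = 'A'
connect by prior id = parent_id and status = 'A';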

Oracle Parent Table with 2 Possible Children Tables [duplicate]

I have a requirement for a web app that states that a user should be able to either upload one or more instruction documents (.pdf, .doc, .txt) or provide text for the instructions. The user can upload a document and provide text, or they can do one or the other, but they have to do something (not nullable). How would this be designed in a database? Would this be considered a complete sub-type (see below)?
This is tiny part of a larger schema, so I just posted what I felt was necessary for this particular question.
Ypercube's answer is fine, except this can, in fact, be done purely through declarative integrity while keeping separate tables. The trick is to combine deferred circular FOREIGN KEYs with a little bit of creative denormalization:
CREATE TABLE Instruction (
InstructionId INT PRIMARY KEY,
TextId INT UNIQUE,
DocumentId INT UNIQUE,
CHECK (
(TextId IS NOT NULL AND InstructionId = TextId)
OR (DocumentId IS NOT NULL AND InstructionId = DocumentId)
)
);
CREATE TABLE Text (
InstructionId INT PRIMARY KEY,
FOREIGN KEY (InstructionId) REFERENCES Instruction (TextId) ON DELETE CASCADE
);
CREATE TABLE Document (
InstructionId INT PRIMARY KEY,
FOREIGN KEY (InstructionId) REFERENCES Instruction (DocumentId) ON DELETE CASCADE
);
ALTER TABLE Instruction ADD FOREIGN KEY (TextId) REFERENCES Text DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE Instruction ADD FOREIGN KEY (DocumentId) REFERENCES Document DEFERRABLE INITIALLY DEFERRED;
Inserting Text is done like this:
INSERT INTO Instruction (InstructionId, TextId) VALUES (1, 1);
INSERT INTO Text (InstructionId) VALUES (1);
COMMIT;
Inserting Document like this:
INSERT INTO Instruction (InstructionId, DocumentId) VALUES (2, 2);
INSERT INTO Document (InstructionId) VALUES (2);
COMMIT;
And inserting both Text and Document like this:
INSERT INTO Instruction (InstructionId, TextId, DocumentId) VALUES (3, 3, 3);
INSERT INTO Text (InstructionId) VALUES (3);
INSERT INTO Document (InstructionId) VALUES (3);
COMMIT;
However, trying to insert Instruction alone fails on commit:
INSERT INTO Instruction (InstructionId, TextId) VALUES (4, 4);
COMMIT; -- Error (FOREIGN KEY violation).
Attempting to insert the "mismatched type" also fails on commit:
INSERT INTO Document (InstructionId) VALUES (1);
COMMIT; -- Error (FOREIGN KEY violation).
And of course, trying to insert bad values into Instruction fails (this time before commit):
INSERT INTO Instruction (InstructionId, TextId) VALUES (5, 6); -- Error (CHECK violation).
INSERT INTO Instruction (InstructionId) VALUES (7); -- Error (CHECK violation).
I think that this cannot be done with Declarative Referential Integrity alone - not if your design has these 3 separate tables.
You'll have to ensure that all Insert/Delete/Update operations are done within transactions (stored procedures) that enforce such a requirement - so no row is ever inserted or left in table Instruction without a related row in one of the 2 other tables.
If you don't mind having nullable fields, you could merge the 3 tables into one and use a CHECK constraint:
CREATE TABLE Instruction
( InstructionID INT NOT NULL
, Text VARCHAR(255) NULL
, Filepath VARCHAR(255) NULL
, PRIMARY KEY (InstructionID)
, CONSTRAINT Instruction_has_either_text_or_document
CHECK (Text IS NOT NULL OR FilePath IS NOT NULL)
) ;
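A quick sanity check of that constraint (the values are hypothetical):
INSERT INTO Instruction (InstructionID, Text) VALUES (1, 'Read the manual'); -- ok: text only
INSERT INTO Instruction (InstructionID, Filepath) VALUES (2, '/docs/a.pdf'); -- ok: document only
INSERT INTO Instruction (InstructionID) VALUES (3); -- fails the CHECK: neither provided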
If a user submitted text, could your application save it as a .txt file? This way you would only have to worry about dealing with files.
Something feels a bit off here:
There is no UserID in this schema, so it should be added to the Instruction table.
If a user does not upload anything, there will (should) be no entry for that user in the Instruction table.
So the problem -- as stated -- is not about placing constraints on these three tables.
When loading this structure, use a stored procedure and/or a transaction -- to make sure that at least one of the child records gets populated. Though, this has nothing to do with the business requirement that the user has to upload something.

Inserting an empty row

This is so simple it has probably already been asked, but I couldn't find it (if that's the case, I'm sorry for asking).
I would like to insert an empty row into a table so I can pick up its ID (primary key, generated by an insert trigger) through an ExecuteScalar. Data is added to it later in my code.
My question is this: is there a specific insert syntax to create an empty record, or must I go with the regular insert syntax, such as "INSERT INTO table (list all the columns) VALUES (null for every column)"?
Thanks for the answer.
UPDATE: In Oracle, ExecuteScalar on INSERT only returns 0. The final answer is a combination of what was posted below: first you need to declare a parameter, then pick it up with RETURNING.
INSERT INTO TABLENAME (ID) VALUES (DEFAULT) RETURNING ID INTO :parameterName
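For example, from SQL*Plus the same pattern can be exercised with a bind variable (a sketch; the table and column names are hypothetical):
VARIABLE new_id NUMBER
INSERT INTO tablename (id) VALUES (DEFAULT) RETURNING id INTO :new_id;
PRINT new_id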
You do not have to specify every single column, but you may not be able to create an "empty" record. Check for NOT NULL constraints on the table. If there are none (not counting the primary key constraint), then you only need to supply one column. Like this:
insert into my_table ( some_column )
values ( null );
Do you know about the RETURNING clause? You can return that PK back to your calling application when you do the INSERT.
insert into my_table ( some_column )
values ( 'blah' )
returning my_table_id into <your_variable>;
I would question the approach though. Why create an empty row? That would/could mean there are no constraints on that table - a bad thing if you want good, clean data.
Basically, in order to insert a row where the values of all columns are NULL except the primary key's, you could execute a simple insert statement:
insert into your_table(PK_col_name)
values(1); -- 1 for instance, or null
The before-insert trigger, which is responsible for populating the primary key column, will override the value in the values clause of the insert statement, leaving you with an empty record except for the PK value.
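For reference, the kind of before-insert trigger assumed above might look like this (a sketch; the trigger, sequence, and column names are hypothetical):
CREATE OR REPLACE TRIGGER your_table_bi
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  -- Always take the next sequence value, overriding anything supplied by the insert.
  :NEW.PK_col_name := your_table_seq.NEXTVAL;
END;
/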

Is there a way, similar to a check constraint, to help me know if there is a duplicate column value?

table 1
ID | name | main_number | random1 | random2
---+------+-------------+---------+--------
1  | aaaa | blalablabla | ***     | *
2  | vvvv | blublubluuu | ***     | *
3  | aaaa | blalablabla | ***     | **
ID, name and main_number are the primary key.
My problem: I have noticed that the name and main_number columns have duplicate values. I don't want to add any other duplicate values (I have to keep the old duplicates, because my real table contains a lot of duplicated data and it's hard to remove it all).
What I want is to know, when I try to insert (before committing), that the name I am trying to insert is a duplicate.
I can do that within a procedure or trigger, but I have heard constraint checking is simpler and easier (if there is a simpler way than a procedure or trigger, I'll be glad to learn it).
CONSTRAINT check_name
CHECK (name = (A_name))
Can the constraint cover more than one column, like this?
CONSTRAINT check_name
CHECK (name = (A_name) , main_number=( A_number))
And can I write a constraint like this?
CONSTRAINT check_name
CHECK (name = ( select case where there is an column has the same value of column name))
So my question: is there a way, similar to a check constraint, to help me detect duplicates, or do I have to use a trigger?
Since your database is Oracle, you could also use NOVALIDATE constraints. Meaning: "no matter what the existing data looks like, just validate from now on".
create table tb1
(field1 number);
insert into tb1 values (1);
insert into tb1 values (1);
insert into tb1 values (1);
insert into tb1 values (2);
insert into tb1 values (2);
commit;
-- There should be a non-unique index first
create index idx_t1 on tb1 (field1);
alter table tb1 add constraint pk_t1 primary key(field1) novalidate;
-- If you try to insert another 1 or 2 you would get an error
insert into tb1 values (1);
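Applied to the question's table, the same pattern would look something like this (a sketch, assuming a table1 with name and main_number columns):
-- The non-unique index lets the NOVALIDATE constraint coexist with the old duplicates.
create index idx_name_num on table1 (name, main_number);
alter table table1 add constraint uq_name_num unique (name, main_number) novalidate;
-- New duplicates now raise ORA-00001; the existing ones are left alone.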
Yes, a constraint can span multiple columns.
But in this case a check constraint is not applicable, because all existing table rows must satisfy a constraint. Use a trigger.
Also, check constraints cannot contain subqueries.
Alternatively, use a unique function-based index, which will enforce uniqueness for new rows only:
create unique index index1 on table1
(case when ID <= XXX then null else ID end,
case when ID <= XXX then null else name end);
Replace 'XXX' with your current max(ID).
I assume that you want to prevent duplicate records as defined by the combination of name and main_number.
Then the way to go is to clean up your database and create a unique index:
create unique index <index_name> on <table> (name, main_number)
This both enforces the check and speeds lookups up.
In theory, if you really wanted to keep the old duplicate records, you could get along by using a trigger, but then you would have a hard time trying to make sense of this data.
Update
If you used the trigger, you would end up with two partitions of data in one table - one checked, the other not. All of your queries would then have to pay attention to that; you would just be delaying the problem.
So either clean it up (by deleting or merging) or move the old data into a separate table.
You can use a SQL select ... group by to find your duplicates, so you can delete or move them in one pass; a minimal sketch follows.
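Here is that duplicate check, using the column names from the question:
-- name/main_number combinations that occur more than once.
SELECT name, main_number, COUNT(*) AS dup_count
FROM table1
GROUP BY name, main_number
HAVING COUNT(*) > 1;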
