SQLite claims duplicate rows on insert when none can be found - ruby

I have a table in a SQLite database created with the code below. Note the compound primary key:
db.create_table(:person_hash) do
  Integer :person_id
  Bignum :hash  # MD5 hash in hex, stored as a number: hash.to_i(16)
  primary_key [:person_id, :hash]
end
This table has some rows already:
puts db[:person_hash].where(:person_id => 285577).all
# {:person_id=>285577, :hash=>306607097659338192312932577746542919680}
Now, when I try to insert this:
db[:person_hash].insert({:person_id=>285577, :hash=>306607097659338206333361532286405644297})
I get this:
SQLite3::ConstraintException: columns person_id, hash are not unique (Sequel::DatabaseError)
If the row does not already exist in the table, how can it be a duplicate?
I tried inserting another hash for the same person ID instead, and it worked without problems.

This reproduces in plain SQLite, with Sequel out of the picture:
$ sqlite3
SQLite version 3.8.9 OpenBSD
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE person_hash (person_id integer, hash bigint, primary key (person_id, hash));
sqlite> INSERT INTO person_hash VALUES (285577, 306607097659338192312932577746542919680);
sqlite> INSERT INTO person_hash VALUES (285577, 306607097659338206333361532286405644297);
Error: UNIQUE constraint failed: person_hash.person_id, person_hash.hash
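The cause is numeric precision, not a SQLite bug: SQLite stores integers in at most 8 bytes, and a numeric value too large for a 64-bit integer is kept as an 8-byte IEEE 754 REAL, which carries only about 15 significant decimal digits. Both 39-digit hashes round to the same REAL, so the primary key index genuinely sees a duplicate. The same shell confirms the rounding, and storing the hash as TEXT sidesteps it (a sketch; the decimal strings below stand in for the hex digests):
sqlite> SELECT typeof(306607097659338206333361532286405644297);
real
sqlite> SELECT 306607097659338192312932577746542919680 = 306607097659338206333361532286405644297;
1
sqlite> CREATE TABLE person_hash_text (person_id INTEGER, hash TEXT, PRIMARY KEY (person_id, hash));
sqlite> INSERT INTO person_hash_text VALUES (285577, '306607097659338192312932577746542919680');
sqlite> INSERT INTO person_hash_text VALUES (285577, '306607097659338206333361532286405644297');
Both inserts succeed, because TEXT comparison sees all 39 digits; storing the hex digest itself as TEXT (or the 16 raw bytes as a BLOB) works the same way.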

Related

How to add a composite foreign key in liquibase?

I've been struggling for some time now to figure out a way to create a composite foreign key in liquibase.
I have a table A with a composite PK, let's say (id1, id2). I'm trying to make another table B in which A's PK is mapped as a FK.
I'm using liquibase with YAML and something doesn't seem to add up.
I've tried adding the FK when creating the table (i.e., in the column tag):
- column:
    name: id1_id2
    type: int
    constraints:
      nullable: false
      foreignKeyName: fk_id1_id2
      references: A(id1, id2)
Unfortunately this syntax returns an error:
Caused by: java.sql.SQLSyntaxErrorException: ORA-02256: number of referencing columns must match referenced columns
Another thing I've tried is creating the table first, with the columns for the desired FK, and then adding a FK constraint on those columns. This doesn't throw any error, but it does nothing (the Liquibase log also says "empty" in the description):
changes:
  - addForeignKeyContraint:
      baseColumnNames: id1, id2
      baseTableName: B
      constraintName: fb_id1_id2
      referencedColumnNames: id1, id2
      referencedTableName: A
Any help would be much appreciated.
Thanks
Have you tried addForeignKeyConstraint? Note the spelling, too: your second attempt has addForeignKeyContraint (missing an "s"), which likely explains why the changeset showed up as "empty". Something like this:
- changeSet:
    id: 1
    author: you
    changes:
      - addForeignKeyConstraint:
          baseColumnNames: id1, id2
          baseTableName: tableB
          constraintName: FK_tableB_tableA
          referencedColumnNames: id1, id2
          referencedTableName: tableA
I don't use Liquibase, but here's how it's supposed to look as far as Oracle is concerned: if you want to create a composite foreign key (in the detail table), it has to reference a composite primary key (in the master table).
Have a look at this example:
SQL> create table master
2 (id_1 number,
3 id_2 number,
4 constraint pk_mas primary key (id_1, id_2));
Table created.
SQL> create table detail
2 (id_det number constraint pk_det primary key,
3 --
4 id_1 number,
5 id_2 number,
6 constraint fk_det_mas foreign key (id_1, id_2) references master (id_1, id_2));
Table created.
SQL>
It just wouldn't work otherwise; that's why you got the error
ORA-02256: number of referencing columns must match referenced columns
because your detail table contained a single column (id1_id2) and tried to reference two columns in table A (id1, id2).
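For reference, the failing changeset was effectively asking Oracle for DDL along these lines (a sketch reconstructed from the error, not the exact SQL Liquibase generated):
CREATE TABLE B (
  id1_id2 NUMBER NOT NULL,
  CONSTRAINT fk_id1_id2 FOREIGN KEY (id1_id2) REFERENCES A (id1, id2)
);
-- ORA-02256: number of referencing columns must match referenced columns
Give B its own id1 and id2 columns and reference the (id1, id2) pair, as in the example above, and the error goes away.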

Is there an order of columns when creating a Hive table that needs to be partitioned dynamically?

I am trying to load an RDBMS table into Hive. I need to partition the table dynamically, based on one column's data. I have the schema of the Greenplum table as below:
forecast_id:bigint
period_year:numeric(15,0)
period_num:numeric(15,0)
period_name:character varying(15)
drm_org:character varying(10)
ledger_id:bigint
currency_code:character varying(15)
source_system_name:character varying(30)
source_record_type:character varying(30)
xx_last_update_log_id:integer
xx_data_hash_code:character varying(32)
xx_data_hash_id:bigint
xx_pk_id:bigint
When I checked the schema of the same table on Hive (where it is usually replicated), I ran describe extended tablename and got the schema below:
forecast_id bigint
period_year bigint
period_num bigint
period_name string
drm_org string
ledger_id bigint
currency_code string
source_record_type string
xx_last_update_log_id int
xx_data_hash_code string
xx_data_hash_id bigint
xx_pk_id bigint
source_system_name String
So I asked my lead why the column source_system_name is given at the end in the Hive table, and I got this answer: "The columns that are used to partition the Hive table dynamically come at the end of the table."
Is it true that the columns on which the Hive table is dynamically partitioned should come at the end of the schema?
The order of the columns matters when you use dynamic partitioning in Hive. You can find more details here. From the documentation:
In INSERT ... SELECT ... queries, the dynamic partition columns must be specified last among the columns in the SELECT statement and in the same order in which they appear in the PARTITION() clause.
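As an illustration, here is a minimal sketch using a few of the columns from the question (the staging table forecast_staging is a hypothetical name for the source of the data):
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- the partition column is declared in PARTITIONED BY, not in the column list
CREATE TABLE forecast (
  forecast_id BIGINT,
  period_year BIGINT,
  period_num BIGINT
)
PARTITIONED BY (source_system_name STRING);

-- source_system_name goes last in the SELECT, matching the PARTITION() clause
INSERT OVERWRITE TABLE forecast PARTITION (source_system_name)
SELECT forecast_id, period_year, period_num, source_system_name
FROM forecast_staging;
This also explains the describe output: Hive reports partition columns after the regular columns, which is why source_system_name shows up at the end.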

ruby sqlite3 insert with autoincrement

I'm having trouble inserting data into a sqlite3 database with an auto-incrementing id. Here is what I have tried so far:
begin
  db = SQLite3::Database.open db_name
  db.execute "CREATE TABLE IF NOT EXISTS Audit(
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    module TEXT,
    hostname TEXT,
    criticity TEXT,
    raw_data TEXT
  );"
rescue SQLite3::Exception => e
  puts e.backtrace
ensure
  db.close if db
end
And I insert data like this:
db.execute("INSERT INTO Audit VALUES (module,hostname,criticity,raw_data)",
check,
hostname,
raw_data,
criticity
)
Here is the error:
#<SQLite3::SQLException: table Audit has 5 columns but 4 values were supplied>
I don't know how to supply the id value, as it should be auto-incremented.
That's an odd error; I get SQLite3::SQLException: no such column: module
I think you have the format wrong; try it like this instead:
db.execute("INSERT INTO Audit (module,hostname,criticity,raw_data) VALUES (?,?,?,?) ", 'module', 'host', 'criticity', "\x01\x00\x00\x00\x01")
Results:
sqlite> SELECT * FROM Audit;
1|module|host|criticity|☺
Just as a guess, try inserting a null value for the id:
db.execute("INSERT INTO Audit VALUES (?,?,?,?,?)",
  nil,  # NULL for id lets the INTEGER PRIMARY KEY column assign itself
  check,
  hostname,
  criticity,
  raw_data
)
Also look at https://www.sqlite.org/faq.html#q1; it states that a column declared INTEGER PRIMARY KEY will autoincrement.
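To read back the auto-assigned id after an insert, last_insert_rowid() does the job in the shell (the sqlite3 gem exposes the same value as db.last_insert_row_id):
sqlite> INSERT INTO Audit (module,hostname,criticity,raw_data) VALUES ('module','host','criticity','raw');
sqlite> SELECT last_insert_rowid();
1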

Sequel adding a "returning null" to my inserts. How do I disable it?

I'm using Ruby Sequel (ORM gem) to connect to a Postgres database. I'm not using any models. My insert statements seem to have a "returning null" appended to them automatically (and thus won't return the newly inserted row id/pk). What's the use of this? And why is this the default? And more importantly, how do I disable it (connection-wide)?
Also, I noticed there's a dataset.returning method but it doesn't seem to work!
require 'sequel'
db = Sequel.connect 'postgres://user:secret@localhost/foo'
tbl = "public__bar".to_sym # dynamically generated by the app
dat = {x: 1, y: 2}
id = db[tbl].insert(dat) # generated SQL: INSERT INTO "public"."bar" ("x", "y") VALUES (1, 2) RETURNING NULL
Don't know if it matters, but the table in question is inherited (using Postgres table inheritance).
ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
sequel (3.44.0)
--Edit 1 -- After a bit of troubleshooting--
Looks like the table inheritance COULD BE the problem here. Sequel seems to automatically run a query to determine the pk of a table (in my case the pk is defined on a table up the chain); not finding one, perhaps it appends the "returning null"?
SELECT pg_attribute.attname AS pk
FROM pg_class, pg_attribute, pg_index, pg_namespace
WHERE pg_class.oid = pg_attribute.attrelid
  AND pg_class.relnamespace = pg_namespace.oid
  AND pg_class.oid = pg_index.indrelid
  AND pg_index.indkey[0] = pg_attribute.attnum
  AND pg_index.indisprimary = 't'
  AND pg_class.relname = 'bar'
  AND pg_namespace.nspname = 'public'
--Edit 2--
Yup, looks like that's the problem!
If you are using PostgreSQL inheritance, please note that the following are not inherited:
Primary Keys
Unique Constraints
Foreign Keys
In general you must declare these on each child table. For example:
CREATE TABLE my_parent (
  id bigserial primary key,
  my_value text not null unique
);
CREATE TABLE my_child() INHERITS (my_parent);
INSERT INTO my_child(id, my_value) values (1, 'test');
INSERT INTO my_child(id, my_value) values (1, 'test'); -- works, no error thrown
What you want instead is to do this:
CREATE TABLE my_parent (
  id bigserial primary key,
  my_value text not null unique
);
CREATE TABLE my_child (
  primary key(id),
  unique(my_value)
) INHERITS (my_parent);
INSERT INTO my_child(id, my_value) values (1, 'test');
INSERT INTO my_child(id, my_value) values (1, 'test'); -- unique constraint violation thrown
This sounds to me like you have some urgent DDL issues to fix.
You could retrofit the second's constraints onto the first with:
ALTER TABLE my_child ADD PRIMARY KEY(id);
ALTER TABLE my_child ADD UNIQUE (my_value);
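Once my_child has its own primary key, Sequel's pk-detection query (quoted in Edit 1 above) finds it, and the insert can return the new id instead of appending RETURNING NULL. In SQL terms (a sketch):
INSERT INTO my_child (my_value) VALUES ('another value') RETURNING id;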

Oracle Foreign Key Issues with Multi-Table Inserts and Blobs

We have a single table that we want to break up into a tree of tables based upon a particular source column. I wanted to try using a multi-table insert, but it seems that if I insert a blob into a sub-table, I wind up with a foreign key constraint violation.
I don't think this violates the rules about multi-table inserts but I could be wrong...
I am hoping that someone could point me to some more in-depth resources about what is actually going on here, so that I can feel confident that whatever solution I choose will work as part of a Liquibase changeset on Oracle databases 9i -> 11g.
Hopefully Simplified Scenario
CREATE TABLE source (
  pk NUMBER NOT NULL PRIMARY KEY,
  type VARCHAR2(20) NOT NULL,
  content VARCHAR2(20) NOT NULL
);
INSERT INTO source (pk,type,content) values (1,'two','n/a');
INSERT INTO source (pk,type,content) values (2,'one','Content');
CREATE TABLE dest (
  pk NUMBER NOT NULL PRIMARY KEY,
  type VARCHAR2(20) NOT NULL
);
CREATE TABLE dest_one (
  pkfk NUMBER NOT NULL PRIMARY KEY,
  data BLOB NOT NULL,
  CONSTRAINT XFK1DEST_ONE FOREIGN KEY (pkfk) REFERENCES dest (pk)
);
CREATE TABLE dest_two (
  pkfk NUMBER NOT NULL PRIMARY KEY,
  CONSTRAINT XFK1DEST_TWO FOREIGN KEY (pkfk) REFERENCES dest (pk)
);
Source contains our original data. dest will be our parent table, with children dest_one and dest_two (which will contain information on things of type 'one' or 'two' respectively). Things of type one have content, but things of type two do not.
The Failed Attempt
INSERT ALL
  WHEN 1=1 THEN INTO dest (pk,type) VALUES (pk,type)
  WHEN type='one' THEN INTO dest_one (pkfk,data) VALUES (pk,content)
  WHEN type='two' THEN INTO dest_two (pkfk) VALUES (pk)
SELECT pk,type,utl_raw.cast_to_raw(content) as content from source where type in ('one','two');
As previously mentioned, I wound up with a foreign key constraint violation here. To further illustrate that the blob was the issue, I tried two separate similar queries (below): the one without the blob insert worked, while the one with the blob insert failed.
INSERT ALL
  WHEN 1=1 THEN INTO dest (pk,type) VALUES (pk,type)
  WHEN type='two' THEN INTO dest_two (pkfk) VALUES (pk)
SELECT pk,type,utl_raw.cast_to_raw(content) as content from source where type = 'two';
/* Successful */
INSERT ALL
  WHEN 1=1 THEN INTO dest (pk,type) VALUES (pk,type)
  WHEN type='one' THEN INTO dest_one (pkfk,data) VALUES (pk,content)
SELECT pk,type,utl_raw.cast_to_raw(content) as content from source where type = 'one';
/* ORA-02291: integrity constraint violated, no parent key */
Solution 1 - Traditional Inserts
INSERT INTO dest (pk,type) SELECT pk,type from source where type in ('one','two');
INSERT INTO dest_two (pkfk) SELECT pk from source where type = 'two';
INSERT INTO dest_one (pkfk,data) SELECT pk,utl_raw.cast_to_raw(content) from source where type = 'one';
One option I am considering is going back to multiple separate INSERT statements, but unlike how I have stated them here, I'm concerned that I'll have to write my sub-table inserts to only attempt those rows present in the parent dest table (a guarded version is sketched below). I need to do more research on how Liquibase handles multiple SQL statements in the same changeset.
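Such a guarded sub-table insert might look like this (a sketch of the concern, not a recommendation):
INSERT INTO dest_one (pkfk, data)
SELECT s.pk, utl_raw.cast_to_raw(s.content)
FROM source s
WHERE s.type = 'one'
AND EXISTS (SELECT 1 FROM dest d WHERE d.pk = s.pk);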
Solution 2 - Temporarily disabling foreign key constraints
ALTER TABLE dest_one DISABLE CONSTRAINT XFK1DEST_ONE;
INSERT ALL
  WHEN 1=1 THEN INTO dest (pk,type) VALUES (pk,type)
  WHEN type='one' THEN INTO dest_one (pkfk,data) VALUES (pk,content)
  WHEN type='two' THEN INTO dest_two (pkfk) VALUES (pk)
SELECT pk,type,utl_raw.cast_to_raw(content) as content from source where type in ('one','two');
ALTER TABLE dest_one ENABLE CONSTRAINT XFK1DEST_ONE;
This is the solution I'm leaning toward. While disabling the foreign key on my blob table seems to make it work in my test environment (10g - 10.2.0.1.0), I'm not sure whether I should be disabling the foreign key on the non-blob table as well (due to how 9i, 11g, or other versions of 10g may behave); a symmetric version is sketched below. Any resources here too would be appreciated.
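If you do go this route, the symmetric version disables and re-enables both constraints around the load (a sketch, untested across 9i/10g/11g):
ALTER TABLE dest_one DISABLE CONSTRAINT XFK1DEST_ONE;
ALTER TABLE dest_two DISABLE CONSTRAINT XFK1DEST_TWO;
-- run the INSERT ALL from The Failed Attempt here
ALTER TABLE dest_two ENABLE CONSTRAINT XFK1DEST_TWO;
ALTER TABLE dest_one ENABLE CONSTRAINT XFK1DEST_ONE;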
Thanks a bunch!
Another solution would be to defer the constraint evaluation until COMMIT. I suspect (but am not sure) that the multi-table insert is inserting rows in an order other than the one you expect and want. Recreate your constraints as follows:
ALTER TABLE DEST_ONE DROP CONSTRAINT XFK1DEST_ONE;
ALTER TABLE DEST_ONE
  ADD CONSTRAINT XFK1DEST_ONE
  FOREIGN KEY (pkfk) REFERENCES dest (pk)
  INITIALLY DEFERRED DEFERRABLE;
ALTER TABLE DEST_TWO DROP CONSTRAINT XFK1DEST_TWO;
ALTER TABLE DEST_TWO
  ADD CONSTRAINT XFK1DEST_TWO
  FOREIGN KEY (pkfk) REFERENCES dest (pk)
  INITIALLY DEFERRED DEFERRABLE;
This re-creates the constraints so that they can be deferred, and are deferred from the time they're created. Then try your original INSERT again.
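For instance, with the deferrable constraints in place, the original statement runs as-is and the FK checks fire only at commit:
INSERT ALL
  WHEN 1=1 THEN INTO dest (pk,type) VALUES (pk,type)
  WHEN type='one' THEN INTO dest_one (pkfk,data) VALUES (pk,content)
  WHEN type='two' THEN INTO dest_two (pkfk) VALUES (pk)
SELECT pk,type,utl_raw.cast_to_raw(content) as content from source where type in ('one','two');

COMMIT; -- deferred constraints are evaluated here, once all parent rows exist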
Share and enjoy.
