How do I update an Android Room column from notNull=true to notNull=false? - android-room

Problem: I am using Android Room with a pre-populated database, and I cannot get the table columns to change from notNull=true to notNull=false. The pre-populated database schema is correct, but I cannot get Android Room to update its expected schema to match.
What I have done: I edited the JSON schema file, removing the NOT NULL for the specific columns in createSql, and under fields I updated the same columns to "notNull": false. I also tried a migration, not knowing if it was correct, using ALTER TABLE Notes ADD COLUMN 'QuestionID' INTEGER, and it actually set the JSON file back to NOT NULL. I can't seem to find information on how to do this. The Entity does not have these annotations, and I wasn't sure it was necessary to define them on the Entity, as this database has other tables without them that pass compilation without issue. I'm sure this is another 80/20 case where I'm missing something simple.
Example table in the JSON file. The Question, Quote, Term and Deleted fields need to be notNull=false but keep changing back to true, while the pre-populated table is correct:
"createSql": "CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`NoteID` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER NOT NULL, `CommentID` INTEGER NOT NULL, `QuestionID` INTEGER NOT NULL, `QuoteID` INTEGER NOT NULL, `TermID` INTEGER NOT NULL, `TopicID` INTEGER NOT NULL, `Deleted` INTEGER NOT NULL, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )",
"fields": [
{
"fieldPath": "noteID",
"columnName": "NoteID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "sourceID",
"columnName": "SourceID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "commentID",
"columnName": "CommentID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "questionID",
"columnName": "QuestionID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "quoteID",
"columnName": "QuoteID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "termID",
"columnName": "TermID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "topicID",
"columnName": "TopicID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "deleted",
"columnName": "Deleted",
"affinity": "INTEGER",
"notNull": true
}
]

The schema in the json file is generated from the Entity, so changing it will make no difference. It isn't even required (except when using AutoMigration).
The pre-populated database schema is correct but I cannot get Android Room to update correctly to match:
You have to either change the Entities accordingly or convert the pre-populated database accordingly, noting again that the Entities define what Room expects.
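If you take the pre-populated database route, note that SQLite cannot add or drop a NOT NULL constraint on an existing column, so converting the database means recreating the table with the definition Room expects (the createSql shown above) and copying the data across. A rough sketch, run against the asset database with a SQLite tool rather than as a Room migration, and assuming the table is named Notes as in the question's migration attempt:-
CREATE TABLE `Notes_new` (
`NoteID` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
`SourceID` INTEGER NOT NULL,
`CommentID` INTEGER NOT NULL,
`QuestionID` INTEGER NOT NULL,
`QuoteID` INTEGER NOT NULL,
`TermID` INTEGER NOT NULL,
`TopicID` INTEGER NOT NULL,
`Deleted` INTEGER NOT NULL,
FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION,
FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION,
FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION
);
-- copy the existing rows; this will fail if any row holds NULL in a column now declared NOT NULL
INSERT INTO `Notes_new` SELECT * FROM `Notes`;
DROP TABLE `Notes`;
ALTER TABLE `Notes_new` RENAME TO `Notes`;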
If you change the Entities, the language used matters as to the exact answer.
With Kotlin, Notes could be:-
@Entity(
foreignKeys = [
ForeignKey(entity = Source::class, parentColumns = ["SourceID"], childColumns = ["SourceID"]),
ForeignKey(entity = Comment::class, parentColumns = ["CommentID"], childColumns = ["CommentID"]),
ForeignKey(entity = Topic::class, parentColumns = ["TopicID"], childColumns = ["TopicID"])
]
)
data class Note(
@PrimaryKey(autoGenerate = true)
val NoteId: Long,
val SourceID: Long?,
val CommentID: Long?,
val QuestionID: Long?,
val QuoteID: Long?,
val TermID: Long, //<<<<< NOT NULL
val TopicID: Long?,
val Deleted: Long?
)
The generated java then shows the table create as :-
_db.execSQL("CREATE TABLE IF NOT EXISTS `Note` (`NoteId` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER, `CommentID` INTEGER, `QuestionID` INTEGER, `QuoteID` INTEGER, `TermID` INTEGER NOT NULL, `TopicID` INTEGER, `Deleted` INTEGER, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )");
i.e. the columns typed Long? do not have NOT NULL (the TermID column has NOT NULL because Long, rather than Long?, was used).
With Java, the column type cannot be a primitive for NULLs to be allowed, as a primitive MUST have a value and cannot be null, so Room derives NOT NULL for it. An object type (e.g. Long rather than long) is taken as allowing NULLs. To force NOT NULL on an object type, the @NotNull annotation needs to be used.
So the Java equivalent (named JavaNote to allow both to be used/compiled) could be:-
@Entity(
foreignKeys = {
@ForeignKey(entity = Source.class, parentColumns = {"SourceID"}, childColumns = {"SourceID"}),
@ForeignKey(entity = Comment.class, parentColumns = {"CommentID"}, childColumns = {"CommentID"}),
@ForeignKey(entity = Topic.class, parentColumns = {"TopicID"}, childColumns = {"TopicID"})
}
)
class JavaNote {
@PrimaryKey(autoGenerate = true)
long NoteID = 0; // primitives cannot be NULL, thus imply NOT NULL
Long SourceID;
Long CommentID;
Long QuestionID;
Long QuoteID;
@NotNull
Long TermID; // or long TermID
Long TopicID;
Long Deleted;
}
The generated java then has the table create as :-
_db.execSQL("CREATE TABLE IF NOT EXISTS `JavaNote` (`NoteID` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER, `CommentID` INTEGER, `QuestionID` INTEGER, `QuoteID` INTEGER, `TermID` INTEGER NOT NULL, `TopicID` INTEGER, `Deleted` INTEGER, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )");
Again, the TermID column has been purposely coded to use NOT NULL.
The generated java is available after compiling. It is found in the generated java (use the Android view) in the class with the same name as the class annotated with @Database, suffixed with _Impl. The statements themselves are in the createAllTables method.

Related

Vue.js and Laravel problem with computed properties: can't save data in database

I am a beginner in Vue.js and I need help. I'm working on an invoice application, and when I select a customer I want to auto-fill the form properties; I'm using computed properties to get the data I need.
I succeed in saving to the database only once, and then Laravel gives me an error. I can see in Vue devtools that the computed properties remain the same as the first time and don't change, except when I click on the component in devtools, after which they save again. This is my data form:
form: {
customer_id: null,
project_id: null,
invoice_date: this.invoice_data.invoice_date,
due_date: null,
invoice_number: this.invoice_data.invoice_number,
item_id: null,
billing_address: null,
billing_city: null,
billing_state: null,
billing_zip_code: null,
billing_country_id : null,
shipping_country_id: null,
shipping_address: null,
shipping_city: null,
shipping_state: null,
shipping_zip_code: null,
items_test: null,
},
This is the computed property I use to find the data when I select a customer:
customer_object: function() {
return this.customers.find(customer => customer.id === this.form.customer_id) ?? ""
},
These are my props:
props: {
customers: Object,
users: Object,
countries: Object,
currencies:Object,
items:Object,
payment_modes: Object,
invoice_data: Object,
// invoice_items:Object,
products:Object,
// projects:Object,
// customers_projects: Object,
// filters: Object,
// groups: Object,
errors: Object,
},
This is another computed property where I try to put values into the form when I select a customer:
formCount: function () {
return [
this.form.billing_address = this.customer_object.address,
this.form.billing_city = this.customer_object.city,
this.form.billing_state = this.customer_object.state,
this.form.billing_zip_code = this.customer_object.zip_code,
this.form.billing_country_id = this.customer_object.country_id,
this.form.shipping_city = this.customer_object.shipping_city,
this.form.shipping_state = this.customer_object.shipping_state,
this.form.shipping_address = this.customer_object.shipping_address,
this.form.shipping_zip_code = this.customer_object.shipping_zip_code,
this.form.shipping_country_id = this.customer_object.shipping_country_id,
]
},

keystonejs: form a multi-column unique constraint

How to form a unique constraint with multiple fields in keystonejs?
const Redemption = list({
access: allowAll,
fields: {
program: relationship({ ref: 'Program', many: false }),
type: text({ label: 'Type', validation: { isRequired: true }, isIndexed: 'unique' }),
name: text({ label: 'name', validation: { isRequired: true }, isIndexed: 'unique' }),
},
//TODO: validation to check that program, type, name form a unique constraint
})
The best way I can think to do this currently is by adding another field to the list and concatenating your other values into it using a hook. This lets you enforce uniqueness across these three values (combined) at the DB level.
The list config (and hook) might look like this:
const Redemption = list({
access: allowAll,
fields: {
program: relationship({ ref: 'Program', many: false }),
type: text({ validation: { isRequired: true } }),
name: text({ validation: { isRequired: true } }),
compoundKey: text({
isIndexed: 'unique',
ui: {
createView: { fieldMode: 'hidden' },
itemView: { fieldMode: 'read' },
listView: { fieldMode: 'hidden' },
},
graphql: { omit: ['create', 'update'] },
}),
},
hooks: {
resolveInput: async ({ item, resolvedData }) => {
const program = resolvedData.program?.connect.id || ( item ? item?.programId : 'none');
const type = resolvedData.type || item?.type;
const name = resolvedData.name || item?.name;
resolvedData.compoundKey = `${program}-${type}-${name}`;
return resolvedData;
},
}
});
A few things to note here:
I've removed the isIndexed: 'unique' config for the main three fields. If I understand the problem you're trying to solve correctly, you actually don't want these values (on their own) to be distinct.
I've also removed the label config from your example. The label defaults to the field key so, in your example, that config is redundant.
As you can see, I've added the compoundKey field to store our composite values:
The ui settings make the field appear as uneditable in the UI
The graphql settings block updates on the API too (you could do the same thing with access control but I think just omitting the field is a bit cleaner)
And of course the unique index, which will be enforced by the DB
I've used a resolveInput hook as it lets you modify data before it's saved. To account for both create and update operations we need to consult both the resolvedData and item arguments - resolvedData gives us new/updated values (but undefined for any fields not being updated) and item gives us the existing values in the DB. By combining values from both we can build the correct compound key each time and add it to the returned object.
And it works! When creating a redemption we're prompted for the three main fields (the compound key is hidden), and the compound key is correctly set from the values entered. Editing any of the values also updates the compound key; in the item view the compound key field is shown read-only for clarity.
And if we check the resultant DB structure, we can see our unique constraint being enforced:
CREATE TABLE "Redemption" (
id text PRIMARY KEY,
program text REFERENCES "Program"(id) ON DELETE SET NULL ON UPDATE CASCADE,
type text NOT NULL DEFAULT ''::text,
name text NOT NULL DEFAULT ''::text,
"compoundKey" text NOT NULL DEFAULT ''::text
);
CREATE UNIQUE INDEX "Redemption_pkey" ON "Redemption"(id text_ops);
CREATE INDEX "Redemption_program_idx" ON "Redemption"(program text_ops);
CREATE UNIQUE INDEX "Redemption_compoundKey_key" ON "Redemption"("compoundKey" text_ops);
Attempting to violate the constraint will produce an error.
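At the database level the failure is Postgres's standard unique-violation error. A hypothetical illustration directly against the table (ids and values invented; program omitted so no foreign key is needed):-
INSERT INTO "Redemption" (id, type, name, "compoundKey")
VALUES ('r1', 'voucher', 'summer', 'none-voucher-summer');
-- a second row with the same compound key is rejected by the unique index:
-- ERROR:  duplicate key value violates unique constraint "Redemption_compoundKey_key"
INSERT INTO "Redemption" (id, type, name, "compoundKey")
VALUES ('r2', 'voucher', 'summer', 'none-voucher-summer');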
If you wanted to customise this behaviour you could implement a validateInput hook and return a custom ValidationFailureError message.

Apache NiFi put value to serial column

I have a database table with the following structure:
CREATE TABLE fact.cabinet_account (
id serial NOT NULL,
account_name text NULL,
cabinet_id int4 NULL,
CONSTRAINT cabinet_account_account_name_key UNIQUE (account_name),
CONSTRAINT cabinet_account_pkey PRIMARY KEY (id),
CONSTRAINT cabinet_account_cabinet_id_fkey FOREIGN KEY (cabinet_id) REFERENCES fact.cabinet(id)
);
And I have JSON from InvokeHttp which I want to put into the database:
{
"login" : "some_maild@gmail.com",
"priority_level" : 5,
"is_archive" : false
}
I'm using QueryRecord with this script:
SELECT
19 AS cabinet_id,
login AS account_name
FROM FLOWFILE
I'm trying to UPSERT in the PutDatabaseRecord processor and get this error:
ERROR: value NULL at column "id"
How do I put a value into a serial column with Apache NiFi?
UPDATE
My JSON looks like (before PutDatabase):
[ {
"account_name" : "email1@maximagroup.ru",
"priority_level" : 1000,
"cabinet_id" : 19
}, {
"account_name" : "email2@gmail.com",
"priority_level" : 1,
"cabinet_id" : 19
}, {
"account_name" : "email3@umww.com",
"priority_level" : 1000,
"cabinet_id" : 19
} ]
The PutDatabaseRecord processor is configured with the UPSERT statement type, targeting the fact.cabinet_account table.
Try making the operation INSERT rather than UPSERT
An UPSERT needs to check whether the given id exists, to know if it should insert or update, which it can't do as no id is provided.
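With INSERT, the processor simply leaves id out of the statement it builds (the record has no id field), so Postgres fills the column from the serial default. The exact SQL PutDatabaseRecord generates may differ, but the effect is like:-
-- id is omitted, so it is taken from the serial column's sequence default
INSERT INTO fact.cabinet_account (account_name, cabinet_id)
VALUES ('email1@maximagroup.ru', 19);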

Create complex argument-driven queries from AWS Lambda?

Look for the // HERE IS THE PROBLEM PART comment to find the code that is the problem.
I am trying to implement AppSync using AWS Lambda (connected to an RDS Postgres server) as a data source. I want to create a putKnowledgeFile mutation that updates my KnowledgeFile with optional arguments. If the client only provides htmlText and properties as arguments, then my update query should only update those two fields.
type Mutation {
putKnowledgeFile(
id: ID!,
htmlText: String,
plainText: String,
properties: AWSJSON
): KnowledgeFile
}
type KnowledgeFile {
id: ID!
htmlText: String!
plainText: String!
properties: AWSJSON!
lastDateTimeModified: AWSDateTime!
dateTimeCreated: AWSDateTime!
}
Here is a piece of the AWS Lambda code:
exports.handler = async (event, context, callback) => {
/* Connecting to Postgres */
let data = null;
let query = ``;
let values = [];
switch (event.info.fieldName) {
case "putKnowledgeFile":
if(event.arguments.htmlText === undefined &&
event.arguments.plainText === undefined &&
event.arguments.properties === undefined) {
callback(`At least one argument except id should be provided in putKnowledgeFile request`);
}
// HERE IS THE PROBLEM PART
query += `update knowledge_file`
query += `
set `;
let index = 0;
for (let fieldName in event.arguments) {
if(event.arguments.hasOwnProperty(fieldName)) {
const fieldValue = event.arguments[fieldName];
if(index === 0) {
query += `${fieldName}=$${index+1}`
values.push(fieldValue);
} else {
query += `, ${fieldName}=$${index+1}`
values.push(fieldValue);
}
index++;
}
}
query += `
where knowledge_file.id = $${index+1};`;
values.push(event.arguments.id);
// HERE IS THE PROBLEM PART
break;
default:
callback(`There is no functionality to process this field: ${event.info.fieldName}`);
return;
}
let res = null;
try {
res = await client.query(query, values); // just sending created query
} catch(error) {
console.log("#client.query");
console.log(error);
}
/* DisConnecting from Postgres */
callback(null, res.rows);
};
Basically, this algorithm creates my query string through multiple string concatenations. I think it's too complicated and error-prone. Is there a way to create dynamic queries based on the presence / absence of certain arguments easily?
Just in case, here is my PostgreSQL schema:
-- main client object for clients
CREATE TABLE client (
id bigserial primary key,
full_name varchar(255)
);
-- knowledge_file
create table knowledge_file (
id bigserial primary key,
html_text text,
plain_text text,
properties jsonb,
last_date_modified timestamptz,
date_created timestamptz,
word_count varchar(50)
);
-- which client holds which knowledge file
create TABLE client_knowledge_file (
id bigserial primary key,
client_id bigint not null references client(id),
knowledge_file_id bigint not null unique references knowledge_file(id) ON DELETE CASCADE
);
I know this is not an optimal solution and might not completely answer your question, but I ran into a similar problem and this is how I solved it.
I created a pipeline resolver.
In the first function, I used a select statement to get the current record.
In the second function, I checked whether the fields (in your case htmlText and properties) are null. If they are, the ctx.prev.result values are used; otherwise the new ones are.
Practical example
First resolver function:
{
"version": "2018-05-29",
"statements": [
"select id, html_text AS \"htmlText\", plain_text AS \"plainText\", properties, last_date_modified AS \"lastDateTimeModified\", date_created AS \"dateTimeCreated\" from knowledge_file where id = $ctx.args.Id"
]
}
Second resolver function:
#set($htmlText = $util.defaultIfNull($ctx.args.htmlText , $ctx.prev.result.htmlText))
#set($properties = $util.defaultIfNull($ctx.args.properties , $ctx.prev.result.properties))
{
"version": "2018-05-29",
"statements": [
"update knowledge_file set html_text = $htmlText, plain_text = $ctx.args.plainText, properties = $properties, last_date_modified = CURRENT_TIMESTAMP, date_created = CURRENT_DATE where id = $ctx.args.Id returning id, html_text AS \"htmlText\", plain_text AS \"plainText\", properties, last_date_modified AS \"lastDateTimeModified\", date_created AS \"dateTimeCreated\""
]
}

Handle graphql schema stitching error in child query

I am new to GraphQL and want to understand the concept here. I have this GraphQL schema (stitched using graphql-tools). Not all cars have a registration. So if I query for 5 cars and one car doesn't have a registration (no id to link between cars and registration), my whole query fails.
How do I handle this and return null for that 1 car and return registration details for the other 4?
{
Vehicles {
Cars {
id
registration {
id
}
}
}
}
If you mark a field as non-null (by appending ! to the type) and then resolve that field to null, GraphQL will always throw an error -- that's unavoidable. If it's possible for a field to end up null in the normal operation of your API, you should probably make it nullable.
However, errors "bubble up" to the nearest nullable field.
So given a schema like this:
type Query {
cars: [Car!]!
}
type Car {
registration: Registration!
}
and this query
{
cars {
registration
}
}
resolving the registration field for any one Car to null will result in the following because the cars field is non-null and each individual Car must also not be null:
{
"data": null,
"errors": [...]
}
If you make the cars field nullable ([Car!]), the error will stop there:
{
"data": {
"cars": null
},
"errors": [...]
}
However, you can make each Car nullable (whether the field is or not), which will let the error stop there and result in an array of objects and nulls (the nulls being the cars that errored). So making the cars type [Car]! or [Car] will give us:
{
"data": {
"cars": [{...}, {...}, null]
},
"errors": [...]
}
