Apache NiFi put value to serial column

I have a database table with the following structure:
CREATE TABLE fact.cabinet_account (
id serial NOT NULL,
account_name text NULL,
cabinet_id int4 NULL,
CONSTRAINT cabinet_account_account_name_key UNIQUE (account_name),
CONSTRAINT cabinet_account_pkey PRIMARY KEY (id),
CONSTRAINT cabinet_account_cabinet_id_fkey FOREIGN KEY (cabinet_id) REFERENCES fact.cabinet(id)
);
And I have a JSON from InvokeHttp which I want to put into the database:
{
  "login" : "some_maild@gmail.com",
  "priority_level" : 5,
  "is_archive" : false
}
I'm using QueryRecord with this script:
SELECT
19 AS cabinet_id,
login AS account_name
FROM FLOWFILE
I'm trying to UPSERT in the PutDatabaseRecord processor and got this error:
ERROR: value NULL at column "id"
How can I put a value into a serial column with Apache NiFi?
UPDATE
My JSON looks like this (before PutDatabaseRecord):
[ {
  "account_name" : "email1@maximagroup.ru",
  "priority_level" : 1000,
  "cabinet_id" : 19
}, {
  "account_name" : "email2@gmail.com",
  "priority_level" : 1,
  "cabinet_id" : 19
}, {
  "account_name" : "email3@umww.com",
  "priority_level" : 1000,
  "cabinet_id" : 19
} ]
PutDatabaseRecord is configured like this (processor configuration screenshot omitted):

Try making the operation INSERT rather than UPSERT.
An UPSERT needs to check whether the given id already exists to know whether it should insert or update, which it can't do here because no id is provided.
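With a plain INSERT that leaves id out of the record, PostgreSQL fills the serial column from its sequence default. The same behaviour can be checked outside NiFi; a minimal sketch with psycopg2, where the connection string and sample values are placeholders:
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me password=secret host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    # id is omitted on purpose, so PostgreSQL assigns it from the serial sequence
    cur.execute(
        """
        INSERT INTO fact.cabinet_account (account_name, cabinet_id)
        VALUES (%s, %s)
        RETURNING id
        """,
        ("some_mail@gmail.com", 19),
    )
    print(cur.fetchone()[0])  # the id assigned by the sequence
conn.close()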

Related

Creating a DynamoDB table using a Lambda function (Python) - error

I have defined 3 attributes in the table definition: agentId, agentName, agentRole. I want to create the KeySchema on agentId (partition key) and agentRole (range key).
In my understanding the table can have 10 attributes, and not all of them have to be part of the KeySchema, because the KeySchema is only used to identify unique records. Right?
It throws the following error:
Response
{
"errorMessage": "An error occurred (ValidationException) when calling the
CreateTable operation: One or more parameter values were invalid: Number of attributes in
KeySchema does not exactly match number of attributes defined in AttributeDefinitions",
"errorType": "ClientError",
"requestId": "d8d07c59-f36c-4989-9ac2-6ada9d8f6521",
"stackTrace": [
" File \"/var/task/lambda_function.py\", line 8, in lambda_handler\n
response = client.create_table(\n",
" File \"/var/runtime/botocore/client.py\", line 391, in _api_call\n return
self._make_api_call(operation_name, kwargs)\n",
" File \"/var/runtime/botocore/client.py\", line 719, in _make_api_call\n
raise error_class(parsed_response, operation_name)\n"
]
}
import json
import boto3

client = boto3.client("dynamodb")

def lambda_handler(event, context):
    response = client.create_table(
        AttributeDefinitions=[
            {
                'AttributeName': 'agentId',
                'AttributeType': 'N'
            },
            {
                'AttributeName': 'agentRole',
                'AttributeType': 'S'
            },
            {
                'AttributeName': 'agentName',
                'AttributeType': 'S'
            }
        ],
        TableName='CallCenterCallsTable',
        KeySchema=[
            {
                'AttributeName': 'agentId',
                'KeyType': 'HASH'
            },
            {
                'AttributeName': 'agentRole',
                'KeyType': 'RANGE'
            }
        ],
        BillingMode='PROVISIONED',
        ProvisionedThroughput={
            'ReadCapacityUnits': 1,
            'WriteCapacityUnits': 1
        }
    )
    print(response)
Remove agentName from the Attribute definitions.
See the documentation for Attribute Definitions:
Represents an attribute for describing the key schema for the table and indexes.
You aren't using agentName in the key schema or indexes, so it shouldn't be included in the table definition. DynamoDB is schemaless. You only need to define the hash key and sort key at creation time. DynamoDB doesn't care about any other attributes you may want to insert into your table.
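A minimal sketch of the corrected call, assuming nothing else changes: agentName is simply dropped from AttributeDefinitions so the definitions match the KeySchema exactly.
import boto3

client = boto3.client("dynamodb")

def lambda_handler(event, context):
    # Only the key attributes are declared; non-key attributes need no definition
    response = client.create_table(
        AttributeDefinitions=[
            {'AttributeName': 'agentId', 'AttributeType': 'N'},
            {'AttributeName': 'agentRole', 'AttributeType': 'S'}
        ],
        TableName='CallCenterCallsTable',
        KeySchema=[
            {'AttributeName': 'agentId', 'KeyType': 'HASH'},
            {'AttributeName': 'agentRole', 'KeyType': 'RANGE'}
        ],
        BillingMode='PROVISIONED',
        ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1}
    )
    print(response)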

How to pass a JSON array in a PL/SQL REST Service

I am trying to pass a JSON array to a REST service in EBS (12.2.10) but am getting the following error:
"-40491 ORA-40491: invalid input data type for JSON_TABLE"
I created the following types:
CREATE OR REPLACE EDITIONABLE TYPE XRCL_CB_INBOUND_TALLY_OBJ AS OBJECT
(TRANSACTION_DATE VARCHAR2(30),
TRANSACTION_TYPE VARCHAR2(5),
ORGANIZATION_ID VARCHAR2(5),
DOCUMENT_ID VARCHAR2(25),
DOCUMENT_LINE_ID VARCHAR2(25),
SKU_CODE VARCHAR2(25),
QUANTITY VARCHAR2(10),
SUBINVENTORY VARCHAR2(25),
LOT_NUMBER VARCHAR2(25));
CREATE OR REPLACE EDITIONABLE TYPE XRCL_CB_INBOUND_TALLY_NT AS TABLE OF XRCL_CB_INBOUND_TALLY_OBJ;
Below is my JSON object which I am passing as a parameter:
{
"TALLYQUANTITY_Input": {
"RESTHeader": {
"Responsibility": "ROCELL",
"RespApplication": "XRCL",
"SecurityGroup": "STANDARD",
"NLSLanguage": "AMERICAN"
},
"InputParameters": {
"P_TRANSACTION_LINES": [
{
"TRANSACTION_TYPE": "IO",
"TRANSACTION_DATE": "01/02/2022 12:00:00 AM",
"ORGANIZATION_ID": "121`enter code here`",
"DOCUMENT_ID": "1",
"DOCUMENT_LINE_ID": "1",
"SKU_CODE": "RC.001.000102.MA.03",
"QUANTITY": "1",
"LOT_NUMBER": "1013A.B.7.J.G",
"SUBINVENTORY": "Saleable"
}
]
}
}
}
The issue was resolved by passing the parameters in the same order in which they are defined in the XSD sequence tag (note, for example, that the payload above lists TRANSACTION_TYPE before TRANSACTION_DATE, while the XRCL_CB_INBOUND_TALLY_OBJ type declares TRANSACTION_DATE first).

How do I update an Android Room column from notNull=true to notNull=false?

Problem: With Android Room and a pre-populated database, I cannot seem to get the table columns to change from notNull=true to notNull=false. The pre-populated database schema is correct, but I cannot get Android Room to update correctly to match it:
What I have done: I edited the JSON schema file, removing the NOT NULL for the specific columns, and under the fields I updated the same columns to "notNull": false. I tried a migration, not knowing if it was correct, using ALTER TABLE Notes ADD COLUMN 'QuestionID' INTEGER, and it actually updated the JSON file to NOT NULL again. I can't seem to find information on how to do this. The Entity does not have these annotations, and I wasn't sure it was necessary to define them at the Entity, as this DB has other tables without these annotations and they pass through compilation without issue. I'm sure this is another 80/20 rule where I'm stupid and missing something.
Example table in the JSON file. The Question, Quote, Term and Deleted fields need to be notNull=false but keep changing back to true... and the pre-populated table is correct.
"createSql": "CREATE TABLE IF NOT EXISTS `${TABLE_NAME}` (`NoteID` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER NOT NULL, `CommentID` INTEGER NOT NULL, `QuestionID` INTEGER NOT NULL, `QuoteID` INTEGER NOT NULL, `TermID` INTEGER NOT NULL, `TopicID` INTEGER NOT NULL, `Deleted` INTEGER NOT NULL, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )",
"fields": [
{
"fieldPath": "noteID",
"columnName": "NoteID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "sourceID",
"columnName": "SourceID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "commentID",
"columnName": "CommentID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "questionID",
"columnName": "QuestionID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "quoteID",
"columnName": "QuoteID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "termID",
"columnName": "TermID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "topicID",
"columnName": "TopicID",
"affinity": "INTEGER",
"notNull": true
},
{
"fieldPath": "deleted",
"columnName": "Deleted",
"affinity": "INTEGER",
"notNull": true
}
]
The schema in the JSON file is generated from the Entity, so changing it will make no difference. It isn't even required (except when using AutoMigration).
The pre-populated database schema is correct but I cannot get Android Room to update correctly to match:
You have to either change the Entities accordingly or convert the pre-populated database accordingly. Noting again that the Entities define what Room expects.
The language used matters as to the exact answer.
With Kotlin then Notes could be:-
data class Note(
    @PrimaryKey(autoGenerate = true)
    val NoteId: Long,
    val SourceID: Long?,
    val CommentID: Long?,
    val QuestionID: Long?,
    val QuoteID: Long?,
    val TermID: Long, //<<<<< NOT NULL
    val TopicID: Long?,
    val Deleted: Long?
)
The generated java then shows the table create as :-
_db.execSQL("CREATE TABLE IF NOT EXISTS `Note` (`NoteId` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER, `CommentID` INTEGER, `QuestionID` INTEGER, `QuoteID` INTEGER, `TermID` INTEGER NOT NULL, `TopicID` INTEGER, `Deleted` INTEGER, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )");
i.e. those declared as Long? do not have NOT NULL (the TermID column has NOT NULL because Long was used instead of Long?).
With Java the column type cannot be a primitive type if NULLs are to be allowed, as primitives MUST have a value and cannot be null, so Room will derive NOT NULL. Using the object type (e.g. Long rather than long) will be taken as NULLs allowed. To force NOT NULL the @NotNull annotation needs to be used.
So Java equivalent (named JavaNote to allow both to be used/compiled) could be :-
@Entity(
        foreignKeys = {
                @ForeignKey(entity = Source.class, parentColumns = {"SourceID"}, childColumns = {"SourceID"}),
                @ForeignKey(entity = Comment.class, parentColumns = {"CommentID"}, childColumns = {"CommentID"}),
                @ForeignKey(entity = Topic.class, parentColumns = {"TopicID"}, childColumns = {"TopicID"})
        }
)
class JavaNote {
    @PrimaryKey(autoGenerate = true)
    long NoteID = 0; // primitives cannot be NULL thus imply NOT NULL
    Long SourceID;
    Long CommentID;
    Long QuestionID;
    Long QuoteID;
    @NotNull
    Long TermID; // or long TermID
    Long TopicID;
    Long Deleted;
}
The generated java then has the table create as :-
_db.execSQL("CREATE TABLE IF NOT EXISTS `JavaNote` (`NoteID` INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, `SourceID` INTEGER, `CommentID` INTEGER, `QuestionID` INTEGER, `QuoteID` INTEGER, `TermID` INTEGER NOT NULL, `TopicID` INTEGER, `Deleted` INTEGER, FOREIGN KEY(`SourceID`) REFERENCES `Source`(`SourceID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`CommentID`) REFERENCES `Comment`(`CommentID`) ON UPDATE NO ACTION ON DELETE NO ACTION , FOREIGN KEY(`TopicID`) REFERENCES `Topic`(`TopicID`) ON UPDATE NO ACTION ON DELETE NO ACTION )");
Again, the TermID column has been purposely coded to use NOT NULL.
The generated Java is available after compiling. It is found in the generated java folder (use the Android view of the project) in the class whose name is the class annotated with @Database suffixed with _Impl. The statements themselves are in the createAllTables method.

AUTO_INCREMENT in H2 database doesn't work when requesting with Postman

I want to persist TODOs in an H2 DB using a Spring Boot application.
The following SQL script initializes the DB and it works properly:
DROP TABLE IF EXISTS todos;
CREATE TABLE todos (
id INT AUTO_INCREMENT PRIMARY KEY,
title VARCHAR(50) NOT NULL UNIQUE,
description VARCHAR(250) NOT NULL,
completion_date DATE,
priority VARCHAR(6) CHECK(priority IN ('LOW', 'MEDIUM', 'HIGH'))
);
INSERT INTO todos (title, description, priority) VALUES
('Create xxx Todo', 'An xxx-TODO must be created.', 'HIGH'),
('Delete xxx Todo', 'An xxx-TODO must be deleted.', 'HIGH'),
('Update xxx Todo', 'An xxx-TODO must be updated.', 'MEDIUM'),
('Complete xxx Todo', 'An xxx-TODO must be completed.', 'LOW');
Console output when starting Spring Boot:
Hibernate: drop table if exists todos CASCADE
Hibernate: drop sequence if exists hibernate_sequence
Hibernate: create sequence hibernate_sequence start with 1 increment by 1
Hibernate: create table todos (id bigint not null, completion_date date, description varchar(250) not null, priority varchar(250) not null, title varchar(50) not null, primary key (id))
Hibernate: alter table todos add constraint UK_c14g1nqfdaaixe1nyw25h3t0n unique (title)
I implemented the controller, service and repository in Java within the Spring Boot application.
I used Postman to test the implemented functionality. Getting all Todos works well, but creating a Todo fails for the first 4 times because of an
org.h2.jdbc.JdbcSQLIntegrityConstraintViolationException: Unique index or primary key violated: "PRIMARY KEY ON PUBLIC.TODOS(ID) [1, 'Create xxx Todo', 'An xxx TODO must be created.', NULL, 'HIGH']"
This is the request body:
{
"title": "Creating xxxx Todo via API",
"description": "An xxxx TODO was created via API.",
"id": null,
"completionDate": null,
"priority": "LOW"
}
This exception occurs 4 times with the following response:
{
"timestamp": "2021-05-25T17:32:57.129+00:00",
"status": 500,
"error": "Internal Server Error",
"message": "",
"path": "/api/todo/create"
}
With the fifth attempt the Todo gets created:
{
"title": "Create xxxx Todo via API",
"description": "An xxxx TODO was created via API.",
"id": 5,
"completionDate": null,
"priority": "LOW"
}
and the ID 5 was assigned to this record.
Hence, the problem seems to be the number of inserted records during the H2 start-up when Spring Boot starts and initializes the H2 database.
In the Todo entity I annotated the id as follows:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
How can I solve this problem when I try to access the creation endpoint of the Spring Boot application via Postman?

Upsert Multiple Records with MongoDB

I'm trying to get MongoDB to upsert multiple records with the following query, ultimately using MongoMapper and the Mongo ruby driver.
db.foo.update({event_id: { $in: [1,2]}}, {$inc: {visit:1}}, true, true)
This works fine if all the records exist, but does not create new records for records that do not exist. The following command has the desired effect from the shell, but is probably not ideal from the ruby driver.
[1,2].forEach(function(id) {db.foo.update({event_id: id}, {$inc: {visit:1}}, true, true) });
I could loop through each id I want to insert from within ruby, but that would necessitate a trip to the database for each item. Is there a way to upsert multiple items from the ruby driver with only a single trip to the database? What's the best practice here? Using mongomapper and the ruby driver, is there a way to send multiple updates in a single batch, generating something like the following?
db.foo.update({event_id: 1}, {$inc: {visit:1}}, true); db.foo.update({event_id: 2}, {$inc: {visit:1}}, true);
Sample Data:
Desired data after command if two records exist.
{ "_id" : ObjectId("4d6babbac0d8bb8238d02099"), "event_id" : 1, "visit" : 11 }
{ "_id" : ObjectId("4d6baf56c0d8bb8238d0209a"), "event_id" : 2, "visit" : 2 }
Actual data after command if two records exist.
{ "_id" : ObjectId("4d6babbac0d8bb8238d02099"), "event_id" : 1, "visit" : 11 }
{ "_id" : ObjectId("4d6baf56c0d8bb8238d0209a"), "event_id" : 2, "visit" : 2 }
Desired data after command if only the record with event_id 1 exists.
{ "_id" : ObjectId("4d6babbac0d8bb8238d02099"), "event_id" : 1, "visit" : 2 }
{ "_id" : ObjectId("4d6baf56c0d8bb8238d0209a"), "event_id" : 2, "visit" : 1 }
Actual data after command if only the record with event_id 1 exists.
{ "_id" : ObjectId("4d6babbac0d8bb8238d02099"), "event_id" : 1, "visit" : 2 }
This - correctly - will not insert any records with event_id 1 or 2 if they do not already exist
db.foo.update({event_id: { $in: [1,2]}}, {$inc: {visit:1}}, true, true)
This is because the objNew part of the query (see http://www.mongodb.org/display/DOCS/Updating#Updating-UpsertswithModifiers) does not have a value for field event_id. As a result, you will need at least X+1 trips to the database, where X is the number of event_ids, to ensure that you insert a record if one does not exist for a particular event_id (the +1 comes from the query above, which increases the visits counter for existing records). To say it in a different way, how does MongoDB know you want to use value 2 for the event_id and not 1? And why not 6?
W.r.t. batch insertion with ruby, I think it is possible as the following link suggests - although I've only used the Java driver: Batch insert/update using Mongoid?
What you are after is the Find and Modify command with the upsert option set to true. See the example from the Mongo test suite (same one linked to in the Find and Modify docs) for an example that looks very much like what you describe in your question.
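A find-and-modify upsert handles one document per call; a minimal sketch in Python with pymongo (rather than the Ruby driver used in the question), with placeholder connection details:
from pymongo import MongoClient, ReturnDocument

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
foo = client["test"]["foo"]

# Upserts the counter document for a single event_id and returns the new state
doc = foo.find_one_and_update(
    {"event_id": 1},
    {"$inc": {"visit": 1}},
    upsert=True,
    return_document=ReturnDocument.AFTER,
)
print(doc)
Note that this still issues one command per event_id, so it does not by itself avoid the extra round trips.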
I found a way to do this using the eval operator for server-side code execution. Here is the code snippet:
def batchpush(body, item_opts = {})
  @batch << {
    :body => body,
    :duplicate_key => item_opts[:duplicate_key] || Mongo::Dequeue.generate_duplicate_key(body),
    :priority => item_opts[:priority] || @config[:default_priority]
  }
end
def batchprocess()
  js = %Q|
    function(batch) {
      var nowutc = new Date();
      var ret = [];
      for(i in batch){
        e = batch[i];
        //ret.push(e);
        var query = {
          'duplicate_key': e.duplicate_key,
          'complete': false,
          'locked_at': null
        };
        var object = {
          '$set': {
            'body': e.body,
            'inserted_at': nowutc,
            'complete': false,
            'locked_till': null,
            'completed_at': null,
            'priority': e.priority,
            'duplicate_key': e.duplicate_key,
            'completecount': 0
          },
          '$inc': {'count': 1}
        };
        db.#{collection.name}.update(query, object, true);
      }
      return ret;
    }
  |
  cmd = BSON::OrderedHash.new
  cmd['$eval'] = js
  cmd['args'] = [@batch]
  cmd['nolock'] = true
  result = collection.db.command(cmd)
  @batch.clear
  #pp result
end
Multiple items are added with batchpush(), and then batchprocess() is called. The data is sent as an array, and the commands are all executed. This code is used in the MongoDequeue GEM, in this file.
Only one request is made, and all the upserts happen server-side.
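For comparison, the same pattern (one upsert per event_id, sent to the server as a single batch) can also be expressed without server-side JavaScript in drivers that support bulk writes. A minimal sketch in Python with pymongo, assuming a local test database; the connection details are placeholders:
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
foo = client["test"]["foo"]

# One upsert per event_id, all sent in a single bulk request
requests = [
    UpdateOne({"event_id": event_id}, {"$inc": {"visit": 1}}, upsert=True)
    for event_id in [1, 2]
]
result = foo.bulk_write(requests, ordered=False)
print(result.upserted_ids, result.modified_count)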
