When I try to update the database with PutDatabaseRecord in Apache NiFi 1.16.0, I get the error below:
Routing to failure.: Must declare the scalar variable "#P7UpdateKey".
There is no problem when inserting into the database, though.
I think this problem appeared when I upgraded to version 1.16.0, but I'm not sure about that.
I tried to insert a row with a SQL query in ESQL code:
INSERT INTO Database.dbo.CUSTOMERS Values (9330,'Sai',7);
It works fine, but it shows an error when I try to insert values taken from the XML input, like:
INSERT INTO Database.dbo.CUSTOMERS(ID,NAME,AGE) Values (InputRoot.XMLNSC.emps.emp.id,InputRoot.XMLNSC.emps.emp.name,InputRoot.XMLNSC.emps.emp.age);
It then shows errors such as BIP2230E, BIP2488E, and BIP2321E.
If there were a connectivity problem, the first insert command would not work either. SELECT statements also work fine.
Any suggestions to resolve this problem?
It is fairly obvious that your paths (InputRoot.XMLNSC.emps.emp.id, etc.) do not exist under InputRoot.XMLNSC. You could easily check this using the debugger or (better) a Trace node.
To fix the problem, correct those paths.
You should also declare and use a REFERENCE variable to make your ESQL more readable:
-- This is not the correct path, otherwise your code would be working already!
DECLARE refEmp REFERENCE TO InputRoot.XMLNSC.emps.emp;
INSERT
INTO Database.dbo.CUSTOMERS(ID,NAME,AGE)
VALUES (refEmp.id,refEmp.name,refEmp.age);
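If emps can contain more than one emp element, the same reference also makes it easy to loop over them and insert each one. This is only a sketch and still assumes the InputRoot.XMLNSC.emps.emp path from the question, which you need to correct to match your real message tree:
-- Sketch only: the path is taken from the question and may need correcting
DECLARE refEmp REFERENCE TO InputRoot.XMLNSC.emps.emp;
WHILE LASTMOVE(refEmp) DO
    INSERT INTO Database.dbo.CUSTOMERS(ID, NAME, AGE)
    VALUES (refEmp.id, refEmp.name, refEmp.age);
    MOVE refEmp NEXTSIBLING REPEAT NAME;
END WHILE;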
I'm trying to pull data from SQL Server using the GenerateTableFetch processor. When I use a MySQL database instead of SQL Server with the same GenerateTableFetch, it works as expected. Whenever I connect to SQL Server, I get the error below.
GenerateTableFetch[id=07bed292-0162-1000-0000-00004bc12345] failed to process session due to java.lang.IllegalArgumentException: Order by clause cannot be null or empty when using row paging: Order by clause cannot be null or empty when using row paging
SQL Server Version: 2016
I went through the link below and learned that there is a bug in GenerateTableFetch for SQL Server. However, I'm not sure whether the bug has been fixed.
https://github.com/apache/nifi/pull/1510
NiFi version I'm using: 1.5
Could someone please let me know whether the bug has been fixed. If not, is there a workaround for it?
Here is my flow and the GenerateTableFetch configuration (screenshots).
This is a bug in some of the DatabaseAdapters in NiFi when GenerateTableFetch is used with no Maximum-value Column set. In this case there's a workaround: use the 2008 database adapter, then a ReplaceText processor to replace "ORDER BY asc" with "ORDER BY newid() asc". I'm trying to find everywhere this could be an issue, and I'll write up a Jira to cover all the cases. The general symptom is OFFSET/LIMIT clauses without an ORDER BY clause.
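To illustrate the symptom (the exact SQL that GenerateTableFetch emits depends on the adapter, so the statements below are only an approximation with a made-up table name):
-- Fails on SQL Server: OFFSET/FETCH paging requires an ORDER BY expression
SELECT * FROM myTable
OFFSET 0 ROWS FETCH NEXT 10000 ROWS ONLY;
-- After the workaround: newid() gives the paging clause something to order by
SELECT * FROM myTable
ORDER BY newid() ASC
OFFSET 0 ROWS FETCH NEXT 10000 ROWS ONLY;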
I want to execute multiple semicolon-delimited Cypher statements in SQL Workbench/J the same way I can run multiple selected SQL statements with the Ctrl+E shortcut.
Here is a small example (modified from the movies dataset):
CREATE (TheMatrix:Movie {title:'The Matrix', released:1999, tagline:'Welcome to the Real World'});
CREATE (Keanu:Person {name:'Keanu Reeves', born:1964});
CREATE (Carrie:Person {name:'Carrie-Anne Moss', born:1967});
CREATE (Laurence:Person {name:'Laurence Fishburne', born:1961});
CREATE (Hugo:Person {name:'Hugo Weaving', born:1960});
CREATE (AndyW:Person {name:'Andy Wachowski', born:1967});
CREATE (LanaW:Person {name:'Lana Wachowski', born:1965});
CREATE (JoelS:Person {name:'Joel Silver', born:1952});
CREATE
(Keanu)-[:ACTED_IN {roles:['Neo']}]->(TheMatrix),
(Carrie)-[:ACTED_IN {roles:['Trinity']}]->(TheMatrix),
(Laurence)-[:ACTED_IN {roles:['Morpheus']}]->(TheMatrix),
(Hugo)-[:ACTED_IN {roles:['Agent Smith']}]->(TheMatrix),
(AndyW)-[:DIRECTED]->(TheMatrix),
(LanaW)-[:DIRECTED]->(TheMatrix),
(JoelS)-[:PRODUCED]->(TheMatrix);
I get the error "setEscapeProcessing is not supported by Neo4jStatement."
How to fix this?
I'm running my local Neo4j instance using Neo4j JDBC driver version 2.3.2.
These are my connection settings (screenshot).
I'm using Ubuntu 14.04 LTS and Java 1.8.0_72-b15, SQL Workbench/J Build 119 (2016-01-31)
This question has already been answered in the SQL Workbench/J forum.
Yes, you can set the property workbench.db.[dbid].ddl.disable.escapeprocessing to false, e.g. by using the following SQL statement:
WbSetConfig workbench.db.[dbid].ddl.disable.escapeprocessing=false;
You need to replace [dbid] with the DBID that is generated for Hive. For details on the DBID, please see here:
http://www.sql-workbench.net/manual/settings.html#dbid
Obviously, in this case the DBID for Neo4j must be used, not the one for Hive (the forum answer mentions Hive because that question was originally about Hive).
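For example, if the DBID that SQL Workbench/J derives for the Neo4j connection were neo4j (an assumption; look up the actual DBID as described in the manual page above), the statement would be:
-- "neo4j" is a guess at the DBID; substitute the value reported for your connection
WbSetConfig workbench.db.neo4j.ddl.disable.escapeprocessing=false;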
I am working on a migration project from Oracle to Teradata.
The tables have been migrated using DataStage jobs.
How do I migrate Oracle views to Teradata?
Directly copying the scripts does not work because of SQL syntax differences between the two databases.
Please help.
The DECODE() Oracle function is available as part of the Oracle UDF Library on the Teradata Developer Exchange Downloads section. Otherwise, you are using the DECODE function in your example in the same manner in which the ANSI COALESCE() function behaves:
COALESCE(t.satisfaction, 'Not Evaluated')
It should be noted that the data types of the COALESCE() function must be implicitly compatible or you will receive an error. Therefore, t.satisfaction would need to be at least CHAR(13) or VARCHAR(13) in order for the COALESCE() to evaluate. If it is not, you can explicitly cast the operand(s).
COALESCE(CAST(t.satisfaction AS VARCHAR(13)), 'Not Evaluated')
If your use of DECODE() includes more evaluations than what is in your example, I would suggest implementing the UDF or replacing it with a standard CASE expression. That being said, with Teradata 14 (or 14.1) you will find that many of the Oracle functions missing from Teradata are made available as standard functions to help ease the migration path from Oracle to Teradata.
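As a sketch of that rewrite (the column values and labels below are made up for illustration), a multi-branch DECODE() such as
DECODE(t.satisfaction, 1, 'Low', 2, 'Medium', 3, 'High', 'Not Evaluated')
maps to the simple CASE expression
CASE t.satisfaction
    WHEN 1 THEN 'Low'
    WHEN 2 THEN 'Medium'
    WHEN 3 THEN 'High'
    ELSE 'Not Evaluated'
END
One caveat: DECODE() treats two NULLs as equal, while CASE ... WHEN compares with =, so a NULL branch has to become an explicit IS NULL test in the CASE version.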
I'm trying to convert some Informix ESQL to Oracle Pro*C. In the existing Informix code the "SERIAL" data type was used to indicate automatically incrementing columns. According to the Oracle documentation, the Oracle Migration Workbench for Informix should be able to handle this, and it explains that it converts the "SERIAL" data type into a "NUMBER" with an associated Oracle sequence and trigger. However, when trying to run the tool it simply replaces the word "SERIAL" with "ERROR(SERIAL)", so I've been trying to manually add in the trigger/sequence.
Their example here: http://docs.oracle.com/html/B16022_01/ch2.htm#sthref112 shows a way that this can be done. The sequence appears to be fairly straightforward; however, when trying to create a trigger like so:
CREATE TRIGGER clerk.TR_SEQ_11_1
BEFORE INSERT ON clerk.JOBS FOR EACH ROW
BEGIN
SELECT clerk.SEQ_11_1.nextval INTO :new.JOB_ID FROM dual;
END;
The Pro*C preprocessor picks up the "CREATE" keyword here, and decides that I'm not allowed to use the host variable ":new.JOB_ID", because host variables cannot be used in conjunction with "CREATE" statements.
My question is, is there some way to create a trigger that links an Oracle sequence to a particular column without using a host variable to specify the column name? The Oracle documentation seems to indicate that their migration tool should be able to cope, which means there must be some way of doing this. However all the examples of the trigger use that I have found all use the host variable which causes the preprocessor to complain.
Thank you for your time.
(Note: I've used the trigger/sequence/column names from the example in the Oracle documentation in the example above.)
I managed to resolve the issue by using an "EXEC SQL EXECUTE IMMEDIATE" statement.
char sql_buf[4096+1];
/* Build the CREATE TRIGGER text at run time; <sql> stands for the DDL string. */
snprintf(sql_buf, 4096, <sql>);
EXEC SQL EXECUTE IMMEDIATE :sql_buf;
This bypasses the preprocessor and therefore allows the statement through without complaint.
It is impossible to create a trigger that links an Oracle sequence to a particular column without using a "host variable" to specify the column name. By the way, it isn't a host variable, just a reference. The same trigger may fire on update and insert, for example, so you have to specify whether you are referencing the NEW or the OLD row. You can do that in MS SQL, but not in Oracle.