I'm trying to pull data from SQL Server using GenerateTableFetch. When I use a MySQL database instead of SQL Server with the same GenerateTableFetch configuration, it works as expected. Whenever I connect to SQL Server, I get the error below.
GenerateTableFetch[id=07bed292-0162-1000-0000-00004bc12345] failed to process session due to java.lang.IllegalArgumentException: Order by clause cannot be null or empty when using row paging: Order by clause cannot be null or empty when using row paging
SQL Server Version: 2016
I went through the link below and learned that there is a bug in GenerateTableFetch for SQL Server. However, I'm not sure whether the bug has been fixed.
https://github.com/apache/nifi/pull/1510
NiFi version I'm using: 1.5
Could someone please let me know whether the bug has been fixed? If not, is there any workaround for it?
Here is my flow.
Edit:
GenerateTableFetch:
This is a bug in some of the DatabaseAdapters in NiFi when using GenerateTableFetch with no Max-value Column set. In this case there's a workaround: you can use the 2008 database adapter, then a ReplaceText processor to replace "ORDER BY asc" with "ORDER BY newid() asc". I'm trying to find out everywhere this could be an issue, and I'll write up a Jira to cover all the cases. The general symptom is OFFSET/LIMIT clauses without an ORDER BY clause.
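As a sketch of that ReplaceText step (the surrounding query shape is only an assumption about what the 2008 adapter emits; myTable is a placeholder):

ReplaceText properties:
  Replacement Strategy: Literal Replace
  Search Value:         ORDER BY asc
  Replacement Value:    ORDER BY newid() asc

-- before: invalid, nothing between ORDER BY and asc
SELECT * FROM (SELECT TOP 10000 *, ROW_NUMBER() OVER (ORDER BY asc) rnum FROM myTable) a
-- after: newid() supplies an arbitrary but syntactically valid ordering,
-- which is acceptable here because no Max-value Column is set
SELECT * FROM (SELECT TOP 10000 *, ROW_NUMBER() OVER (ORDER BY newid() asc) rnum FROM myTable) a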
Related
I'm extracting data from a table in Oracle.
I have an ODBC connection manager to the Oracle database, and the extraction query should include a WHERE clause because the table contains transactional data and there is no reason to extract all of it every time.
I want to initialize the table once and then do it with a For Loop that iterates over the whole table.
Since it's an ODBC connection I can't just put in a WHERE clause, because I need to use a variable; hence I realized I need to parameterize the Data Flow task and write my query in the SqlCommand property of the ODBC source.
The property value is:
SELECT *
FROM DDC.DDC_SALES_TBL
WHERE trunc(CALDAY) between to_date('" + @[User::vstart] + "','MM/DD/YYYY')
and to_date('" + @[User::vstop] + "','MM/DD/YYYY')
where @[User::vstart] and @[User::vstop] are variables containing the 'from/to' dates to be extracted, based on a DATEADD function and another variable (@[User::vcount]) which is supposed to be the iterator, as follows:
(DT_WSTR, 2) MONTH( DATEADD( "day", @[User::vcount], GETDATE() ) ) + "/" +
(DT_WSTR, 2) DAY( DATEADD( "day", @[User::vcount], GETDATE() ) ) + "/" +
(DT_WSTR, 4) YEAR( DATEADD( "day", @[User::vcount], GETDATE() ) )
What’s happening is that the first iteration works fine but the second one generates an error and the package fails.
I marked the variable as EvaluateAsExpression=True
I also marked the DelayValidation=True in both the For Loop and the DataFlow tasks.
The errors are:
(1)Data Flow Task:Error: SQLSTATE: HY010, Message: [Microsoft][ODBC Driver Manager] Function sequence error;
(2) Data Flow Task:Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "ODBC Source.Outputs[ODBC Source Output]" failed because error code 0xC020F450 occurred, and the error row disposition on "ODBC Source" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
(3) Data Flow Task:Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on ODBC Source returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Please assist.
I don't know why I didn't use OLE DB initially; I thought it wouldn't work.
What I tried was to create an OLE DB connection via the Oracle driver, and the connection manager worked, so I used it.
This way you can parameterize the source directly, and the loop worked just fine.
I don't know what causes the conflict with the ODBC source, but that's my workaround.
I didn't find a way to set the SqlCommand property on the ODBC source and use it in a loop that changes the command every iteration; it crashed after the first iteration no matter what I tried. A sketch of the parameterized OLE DB source query is below.
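In the OLE DB source, the ? markers get mapped to User::vstart and User::vstop in the source's Parameters dialog (whether OraOLEDB honors ? markers can vary by provider version, so treat this as an assumption to verify):

SELECT *
FROM DDC.DDC_SALES_TBL
WHERE trunc(CALDAY) BETWEEN to_date(?, 'MM/DD/YYYY')
                        AND to_date(?, 'MM/DD/YYYY')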
Thanks,
I was having the same issue when using the Oracle Source; updating the Attunity Connectors for Oracle as well as the OLE DB driver for SQL Server fixed the problem.
I have an interface which is a simple receive port, mapping, send port. The receive port is the result of an Add Generated Items query. The query just fetches some address data from the database. This data does contain 'foreign' letters, but when I run the query in Oracle SQL Developer it works fine (gives me 12,800 rows).
When BizTalk runs the query, it gives an ORA error, which I assumed was an error the DB returns to BizTalk; am I wrong?
Where do I actually have to fix this problem, and how? Do I need to find out which character set is used on the database and use a convert in the query?
This is an error coming from Oracle - it's very unlikely that it's due to BizTalk or the WCF adapter. It indicates you have some corrupt data in your Oracle DB. You may not be getting the error in SQL Developer because SQL Developer is only returning the first ~50 rows by default (until you actually scroll down past them).
I'd use a strategy like this: http://vibhork.blogspot.com/2011/02/fix-of-ora-29275-partial-multibyte.html to try to find the bad data (e.g. page through the rows using ROWNUM until you find the row that's in error) - you could simulate that in SQL Developer by just scrolling down until you get the error (I think). If you can fix the data, fix it. If the data was put there by another source, you'll either have to get that source to stop putting invalid characters in there, or you'll have to convert/concat the column(s) that are causing problems, like:
SELECT problem_column || '' FROM table
or
SELECT CONVERT(column_name, 'dest_charset', 'source_charset') FROM table
You might try SELECT CONVERT(column_name, 'UTF8', 'US7ASCII'), for example.
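If scrolling doesn't pin it down, a ROWNUM window like the following can be used to binary-search for the corrupt row (a sketch; your_table and the window bounds are placeholders):

SELECT *
  FROM (SELECT t.*, ROWNUM rn
          FROM your_table t
         WHERE ROWNUM <= 200)  -- upper bound of the window
 WHERE rn > 100;               -- lower bound of the window

Shrink the window until the query that errors contains a single row; that row holds the bad bytes.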
This query:
SELECT
DENSE_RANK() OVER (PARTITION BY UPPER(ANUMID), UPPER(PRODNUMID) ORDER BY DATE_ADDED ASC) AS DRANK
, ANUMID
, PRODNUMID
, STATUS_FDATE
, STATUS_XDATE
, ROWSTATUS
FROM
AGCOMN
The query ranks the rows in each group of (ANUMID, PRODNUMID) by DATE_ADDED, from 1 to n. In a subsequent query, DRANK = 1 gets the most recent row added.
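For context, that follow-up query would look roughly like this (a sketch built from the query above):

SELECT ANUMID, PRODNUMID, STATUS_FDATE, STATUS_XDATE, ROWSTATUS
FROM (
    SELECT
    DENSE_RANK() OVER (PARTITION BY UPPER(ANUMID), UPPER(PRODNUMID) ORDER BY DATE_ADDED ASC) AS DRANK
    , ANUMID
    , PRODNUMID
    , STATUS_FDATE
    , STATUS_XDATE
    , ROWSTATUS
    FROM AGCOMN
) ranked
WHERE DRANK = 1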
This query works in Oracle SQL Developer, in my local SSIS environment, and in the SSIS OLE DB Source preview on the TEST server, but does NOT work when the package is run.
ERROR:
[OLE DB Source 1 [677]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "OraOLEDB" Hresult: 0x80040E14 Description: "ORA-00936: missing expression".
Environment:
Local - Windows XP, SQL BIDS 2008
Test - Windows7, SQL/BIDS 2008
I have since rewritten and simplified the query, grabbing the data into a temp table and then using SQL Server to rank and pare down the number of records.
Any ideas on finding the root cause of the SQL not working in the first place? And why the preview would work but running the package does not?
I have discovered the problem. There were embedded comments using the double dash (--) in the middle of the SQL. When I removed them, the query worked. I had removed them in my original posting when I "cleaned up" the query to post it to this public forum.
This is true in both 32-bit and 64-bit mode. I also removed the AS from the AS DRANK phrase, which did not make a difference in this case. Thanks for the ideas.
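A plausible explanation, offered only as an assumption: the preview keeps the query's line breaks, but if the statement is collapsed onto one line before it reaches Oracle, everything after a double dash becomes part of the comment and the statement is truncated, which yields ORA-00936. For example:

SELECT ANUMID, -- account key
       PRODNUMID
FROM AGCOMN

collapsed onto one line becomes

SELECT ANUMID, -- account key PRODNUMID FROM AGCOMN

which Oracle sees as "SELECT ANUMID," with a dangling comma. A block comment (/* account key */) survives the collapse.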
I am a developer with SQL Server experience. We have one legacy application which uses SQR and Oracle to perform a weekly duplicate record search. We got an error while performing this search after 14 years. It says 'ORA-01438: value larger than specified precision allows for this column'. When I googled that error, I found that it relates to a numeric field where the value passed is larger than the column can hold. I can increase the size, but I don't know for which column. Since no one here supports Oracle, I am trying to troubleshoot this error and found people using
alter system set events='1438 trace name Errorstack forever,level 10';
I would like to know if this is the right way to find out which SQL is failing.
Also, what does it alter, and what is level 10? Is there anything I should consider before running this in production? Is there something I need to roll back after running it? I was told that if I do SQL> insert into test values (100000000000000000,'test','test'); where 100000000000000000 is invalid, then it throws the generic Oracle message ORA-01438, but the trace file would show ORA-01438: value larger than specified precision allowed for this column, along with:
Current SQL statement for this session:
insert into test values (100000000000000000,'test','test')
So, where would the trace file be generated? Please let me know if I am not on the right path.
Use DBMS_MONITOR to enable tracing for the affected session. The trace will contain all SQL and errors, and the bind variables too, if you enable them.
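A minimal sketch of that, assuming you can identify the session (the SID/serial# values and the APP_USER name are placeholders; v$diag_info requires 11g or later):

-- find the session to trace
SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';

-- enable tracing with waits and bind values
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 123, serial_num => 4567,
    waits => TRUE, binds => TRUE);
END;
/

-- the trace file is written to this directory on the server
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';

-- disable when done
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
END;
/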
Post update: I have tracked the problem down to the ExecuteNonQuery command. That's the one that fails during an update or hangs during an insert. Trying a simple example using plain ADO.NET and its transactions works perfectly. It also works great on my local home computer connecting to an Oracle Express edition. Does that point again to some kind of server config?
It would be nice to step into the NHibernate code while debugging, but so far I'm still not able to set this up, even though I have rebuilt the source and used those DLL and PDB files. Was anyone able to do this before?
I've been scratching my head over this for a while now. I've been developing with NHibernate and an Oracle 10g database for a few days, so far only using select statements, which all work great with the mapping.
I have now started to implement my first insert (Save) and update statements, but the tests all fail.
They all fail on the transaction.Commit() part.
When performing an INSERT (Save), the code reaches transaction.Commit() but then gets stuck. The test keeps on running without moving forward.
This is the output of the test (note that the test keeps running):
NHibernate: select hibernate_sequence.nextval from dual
NHibernate: INSERT INTO MOB_PL_MAPPING_TEST (DES, TEST_ID) VALUES (:p0, :p1);:p0 = 'This is a test!', :p1 = 161
When performing an UPDATE, the transaction.Commit() fails and I receive the following error stack:
NHibernate: SELECT test0_.TEST_ID as TEST1_10_0_, test0_.DES as DES10_0_ FROM MOB_PL_MAPPING_TEST test0_ WHERE test0_.TEST_ID=:p0;:p0 = 61
NHibernate: UPDATE MOB_PL_MAPPING_TEST SET DES = :p0 WHERE TEST_ID = :p1;:p0 = 'Changed!', :p1 = 61
TestCase 'Data.Tests.Test_Update_on_Test_Table'
failed: NHibernate.TransactionException : Rollback failed with SQL Exception
----> System.InvalidOperationException : This OracleTransaction has completed; it is no longer usable.
c:\CSharp\NH\nhibernate\src\NHibernate\Transaction\AdoTransaction.cs(260,0): at NHibernate.Transaction.AdoTransaction.Rollback()
E:\SubVersion\Application\Src\Data\UnitOfWork\Data.UnitOfWork\GenericTransaction.cs(26,0): at Data.UOW.GenericTransaction.Rollback()
E:\SubVersion\Application\Src\Data\UnitOfWork\Data.UnitOfWork\UnitOfWorkImplementor.cs(49,0): at Data.UOW.UnitOfWorkImplementor.TransactionFlush(IsolationLevel isolationLevel)
E:\SubVersion\Application\Src\Data\UnitOfWork\Data.UnitOfWork\UnitOfWorkImplementor.cs(36,0): at Data.UOW.UnitOfWorkImplementor.TransactionFlush()
E:\SubVersion\Application\Src\Data\Data.Tests\Repositories\LoyaltyRepositoryTests.cs(159,0): at Data.Tests.Test_Update_on_Test_Table()
--InvalidOperationException
at System.Data.OracleClient.OracleTransaction.AssertNotCompleted()
at System.Data.OracleClient.OracleTransaction.Rollback()
c:\CSharp\NH\nhibernate\src\NHibernate\Transaction\AdoTransaction.cs(246,0): at NHibernate.Transaction.AdoTransaction.Rollback()
I must say I'm new to Oracle, but it seems that establishing the transaction causes the problem, even though the same code (using transactions) for a select statement (Get) works fine.
Could this be an Oracle config problem (blocking insert/update transactions), or do I have to configure something else at the application level?
Can anybody help me out here or shed more light on the problem that may occur?
Thanks in advance.
After managing to hook the NHibernate code up to my debugger, I was able to step through the code up to the point where the Command object is executed.
There, the problem was to be found in the parameter types. Parameters that were strings had their type set to "String", where they were supposed to be "AnsiString".
It turned out I had already run into this article when I was mapping a string to an id
http://www.jameskovacs.com/blog/NHibernateAndTheCaseOfTheCrappyOracleErrorMessage.aspx
but didn't think more of it.
Either way, adding the type to each string property in the mapping resolved the problem.
<property name="Description" column="DES" type="AnsiString" />
A hectic 3 days... but it's solved :D