Pull Data from Hive to SQL Server without duplicates using Apache NiFi

Sorry, I'm new to Apache NiFi. I made a data flow that pulls data from Hive and stores it in SQL Server. There is no error in my data flow; the only problem is that it pulls the data repeatedly.
My data flow consists of the following processors:
SelectHiveQL
SplitAvro
ConvertAvroToJSON
ConvertJSONToSQL
PutSQL
For example, my table in Hive has only 20 rows, but when I run the data flow and check my table in MS SQL Server, it has saved 5,000 rows. SelectHiveQL pulled the data repeatedly.
What do I need to do so it pulls only 20 rows, i.e. exactly the number of rows in my Hive table?
Thank you

SelectHiveQL (like many NiFi processors) runs on a user-specified schedule. To get a processor to only run once, you can set the run schedule to something like 30 sec, then start and immediately stop the processor. The processor will be triggered once, and stopping it does not interrupt that current execution, it just causes it not to be scheduled again.
Another way is to set the run schedule to something very large, so that the processor would only execute once per very long interval (days, years, etc.).
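As a rough sketch of the two approaches (the values are illustrative, not prescriptive), the processor's Scheduling tab could look like:

    Scheduling Strategy = Timer driven
    Run Schedule        = 30 sec        (start, then stop after the first trigger)

or, for the "very long interval" variant:

    Scheduling Strategy = CRON driven
    Run Schedule        = 0 0 0 1 1 ?   (Quartz expression: fires once a year, Jan 1 at midnight)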

Related

Looking for an Equivalent of GenerateTableFetch

I use ExecuteSQLRecord to run a query and write to CSV format. The table has 10M rows. Although I can split the output into multiple flow files, the query is executed by only a single thread and is very slow.
Is there a way to partition the query into multiple queries so that the next processor can run multiple concurrent tasks, each one processing one partition? It would be like:
GenerateTableFetch -> ExecuteSQLRecord (with concurrent tasks)
The problem is that GenerateTableFetch only accepts a table name as input; it does not accept custom queries.
Please advise if you have solutions. Thank you in advance.
You can increase the concurrency on NiFi processors (by increasing the number of Concurrent Tasks), and you can also increase the throughput; sometimes that is enough.
Also, if you are working on a cluster, you can apply load balancing on the queue before the processor, so that the workload is distributed among the nodes of your cluster (set the load balance strategy to Round Robin).
Check the Nifi Notes YouTube channel for NiFi antipatterns (there is a video on concurrency).
Please clarify your question if I didn't answer it.
Figured out an alternative way. I developed an Oracle PL/SQL function which takes a table name as an argument and produces a series of queries like "SELECT * FROM T1 OFFSET x ROWS FETCH NEXT 10000 ROWS ONLY". The number of queries is based on the table's row count, which is available as a statistic in the catalog. If the table has 1M rows and I want 100k rows in each batch, it produces 10 queries. I use ExecuteSQLRecord to call this function, which effectively does the job of the GenerateTableFetch processor. My next processor (e.g. ExecuteSQLRecord again) can then run 10 concurrent tasks working in parallel.
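A minimal sketch of such a function (names are hypothetical; it reads the row count from ALL_TABLES optimizer statistics, so it assumes stats are reasonably fresh, and the OFFSET ... FETCH paging syntax requires Oracle 12c or later):

    -- The collection type must exist at schema level so plain SQL can query the function.
    CREATE OR REPLACE TYPE t_sql_list AS TABLE OF VARCHAR2(4000);
    /
    CREATE OR REPLACE FUNCTION gen_table_fetch(
        p_table IN VARCHAR2,
        p_batch IN NUMBER DEFAULT 100000
    ) RETURN t_sql_list PIPELINED IS
        v_rows NUMBER;
    BEGIN
        -- NUM_ROWS comes from gathered statistics, not a live COUNT(*).
        SELECT num_rows INTO v_rows
          FROM all_tables
         WHERE table_name = UPPER(p_table);
        FOR i IN 0 .. CEIL(v_rows / p_batch) - 1 LOOP
            PIPE ROW('SELECT * FROM ' || p_table ||
                     ' OFFSET ' || TO_CHAR(i * p_batch) ||
                     ' ROWS FETCH NEXT ' || TO_CHAR(p_batch) || ' ROWS ONLY');
        END LOOP;
        RETURN;
    END;
    /

ExecuteSQLRecord can then run something like SELECT COLUMN_VALUE AS sql_text FROM TABLE(gen_table_fetch('T1')), producing one record per generated query for the downstream processor to execute.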

How to run a processor only when another processor has finished its execution?

I'm migrating a table (2 million rows) from DB2 to SQL Server. I'm using the following flow:
ExecuteSQL (to select records from the DB2 table).
SplitAvro (to split the records; I configured it with Output Size = 1 so that if one record fails, the rest are still inserted without problems).
PutDatabaseRecord (to insert the records into the SQL Server table).
ExecuteSQL (I need to call a stored procedure that executes UPDATE statements against the same table PutDatabaseRecord writes to).
The problem is that the second ExecuteSQL runs before PutDatabaseRecord completes the insertion of all records.
How can I tell NiFi to run that processor only when the other one finishes?
Thanks in advance!
After PutDatabaseRecord you can use MergeContent in Defragment mode to undo the split operation performed by SplitAvro. This way, a single flow file will come out of MergeContent only when all splits have been seen, and at that point you know it's time for the second ExecuteSQL to run.
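As a rough sketch of the relevant settings (illustrative, not exhaustive):

    MergeContent (placed after PutDatabaseRecord):
        Merge Strategy     = Defragment
        Attribute Strategy = Keep Only Common Attributes

Defragment groups incoming flow files by the fragment.identifier attribute that SplitAvro writes on each split, and releases one merged flow file only after fragment.count splits with that identifier have been seen.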
The answer provided by @bryan-bende is great, as it is simple and elegant. If that doesn't work for some reason, you could also look at Wait/Notify. Having said that, Bryan's answer is simpler and probably more robust.

NiFi Fetching Data From Oracle Issue

I have a requirement to fetch data from Oracle and upload it into Google Cloud Storage.
I am using the ExecuteSQL processor, but it fails for large tables, and even for a table with 1 million records (approx. 45 MB) it takes 2 hours to pull.
The table names are passed via a REST API to ListenHTTP, which passes them to ExecuteSQL. I can't use QueryDatabaseTable because the set of tables is dynamic, and the calls that start the fetch are also triggered dynamically from a UI via the NiFi REST API.
Please suggest any tuning parameters for the ExecuteSQL processor.
I believe you are asking for the capability to produce smaller flow files and possibly send them downstream while the processor is still working on the (large) result set. For QueryDatabaseTable this was added in NiFi 1.6.0 (via NIFI-4836), and in an upcoming release (NiFi 1.8.0, via NIFI-1251) this capability will be available for ExecuteSQL as well.
You should be able to use GenerateTableFetch to do what you want. There you can set the Partition Size (which will end up being the number of rows per flow file), and you don't need a Maximum-value Column if you want to fetch the entire table each time a flow file comes in (which also allows you to handle multiple tables as you described). GenerateTableFetch will generate the SQL statements to fetch "pages" of data from the table, which should give you better, incremental performance on very large tables.
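As an illustration (table name hypothetical; the exact paging syntax depends on the configured Database Type), with Partition Size = 10000 GenerateTableFetch emits one flow file per page, each containing a statement roughly like:

    SELECT * FROM MY_TABLE OFFSET 0 ROWS FETCH NEXT 10000 ROWS ONLY
    SELECT * FROM MY_TABLE OFFSET 10000 ROWS FETCH NEXT 10000 ROWS ONLY
    ...

Because each statement lands in its own flow file, a downstream ExecuteSQL with several Concurrent Tasks can fetch the pages in parallel.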

How does PutHiveQL work on batches?

I am trying to send multiple INSERT statements to PutHiveQL via the ReplaceText processor. Each INSERT statement is a flow file coming out of ReplaceText. I set the Batch Size in PutHiveQL to 100. However, it seems it still sends one flow file at a time. How do I best implement this batching?
I don't think the PutHiveQL processor batches statements at the JDBC layer the way you expect, i.e., the way processors like PutSQL do. From the code, it looks like the Batch Size property is used to control how many flow files the processor works on before yielding, but the statements for each flow file are still executed individually.
That might be a good topic for a NiFi feature request.
The version of Hive supported by NiFi doesn't allow for batching/transactions. The Batch Size parameter is meant to move multiple incoming flow files a bit faster than having the processor invoked every so often. So if you schedule the PutHiveQL processor to run every 5 seconds with a Batch Size of 100, then every 5 seconds (if there are 100 flow files queued) the processor will attempt to process those during one "session".
Alternatively, you can specify a Batch Size of 0 or 1 and schedule it as fast as you like; unfortunately, this will have no effect on the Hive side of things, as Hive auto-commits each HiveQL statement.
Another (possibly more performant) alternative is to put the entire set of rows into HDFS as a CSV file and use the HiveQL "LOAD DATA" DML statement to load the file into a Hive table: https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-DMLOperations
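A minimal HiveQL sketch of that approach (the table name, columns, and HDFS path are hypothetical):

    -- Define a table whose layout matches the CSV file.
    CREATE TABLE staging_rows (id INT, name STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      STORED AS TEXTFILE;

    -- Moves the HDFS file under the table's location; no per-row INSERT statements involved.
    LOAD DATA INPATH '/tmp/staging/rows.csv' INTO TABLE staging_rows;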

Apache NiFi ExecuteSQL Processor

I am trying to fetch data from an Oracle database using the ExecuteSQL processor. Suppose there are 15 records in my Oracle database. When I run the ExecuteSQL processor, it runs continuously as a streaming process, stores all the records as a single file in HDFS, and then repeatedly does the same, so many files pile up in the HDFS location, each re-fetching the already-fetched records from the Oracle DB and containing the same data. How can I make this processor fetch all the data from the Oracle DB once and store it as a single file, and then, whenever new records are inserted into the DB, ingest only those to the HDFS location?
Take a look at the QueryDatabaseTable processor:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.QueryDatabaseTable/index.html
You will need to tell this processor one or more columns to use to track new records; this is the Maximum Value Columns property. If your table has a one-up id column you can use that: every time the processor runs, it records the last id it saw and starts from there on the next execution.
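As a sketch of the effective queries (table and column names hypothetical), QueryDatabaseTable with Maximum Value Columns = id behaves roughly like:

    -- First run, no state stored yet: fetch everything.
    SELECT * FROM my_table;
    -- The processor then records MAX(id) from the results, e.g. 15.

    -- Subsequent runs: fetch only rows added since the stored maximum.
    SELECT * FROM my_table WHERE id > 15;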