How to ingest data in real time from Oracle to Elasticsearch

I am using a loop in Scala to query an Oracle table every 10 seconds, since the Oracle table is continuously receiving inserts. I build a select query, turn the n rows returned from Oracle into n JSON documents, and push them into Elasticsearch. After that I issue a delete statement to erase those n rows from the Oracle table, since they have now been inserted into ES (roughly the pattern sketched below). This is a completely beginner approach, so can you suggest a better way to load data from Oracle into ES in real time or in micro-batches and then delete it from Oracle? I have heard about Logstash and StreamSets. Do you have any ideas? Thanks
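For illustration, a minimal sketch of the current select-then-delete pattern; the events table, event_id key, payload column, and batch size are hypothetical placeholders, not my real schema:
-- Pick the next batch of rows to ship to Elasticsearch (Oracle 12c+ FETCH FIRST syntax)
SELECT event_id, payload
FROM events
ORDER BY event_id
FETCH FIRST 1000 ROWS ONLY;
-- After those rows are indexed into ES, remove them from Oracle
DELETE FROM events
WHERE event_id IN ( /* the event_id values just indexed */ );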

Related

Quickest way to copy over table data in Oracle/PostgreSQL

I'm looking at a Spring Boot application that is used to copy data from a temp table to a permanent table based on the last updated date. It copies a record only if its last updated date is greater than the desired date, so not all records are copied over. Currently the table has around 300K+ records and the process with Spring JPA takes over 2 hours (for all of them), which is not at all feasible. The goal is to bring it down to under 15 minutes maximum. I'm trying to see how much difference using JdbcTemplate would make. Would a PL/SQL script be a better option? Also wanted to see if there are better options out there. Appreciate your time.
Using an Oracle database at the moment, but a PostgreSQL migration is on the cards.
Thanks!
You can do this operation with a straight SQL query (which will work on Oracle or PostgreSQL). Assuming your temp_table has the same columns as the permanent table, the last updated date column is called last_updated, and you want to copy all records updated since 2020-05-03, you could write a query like:
INSERT INTO perm_table
SELECT *
FROM temp_table
WHERE last_updated > TO_DATE('2020-05-03', 'YYYY-MM-DD')
In your app you would pass '2020-05-03' via a placeholder, either directly through JDBC or via JdbcTemplate.
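For reference, a sketch of the same statement with a bind placeholder; the cutoff date would be supplied at runtime, e.g. as the argument to JdbcTemplate.update(sql, cutoffDate):
-- Same copy, with the cutoff date bound as a parameter at runtime
INSERT INTO perm_table
SELECT *
FROM temp_table
WHERE last_updated > ?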

Does Elasticsearch perform a full table scan on my Oracle table every time Logstash runs?

I wanted to know whether Elasticsearch performs a full table scan on my Oracle table when I try to ingest that table's delta data using Logstash.
Elasticsearch doesn't interact with your database; it's Logstash that runs queries against it. Whether Logstash scans the entire table depends on the query itself and on the indexes of the scanned table. Most queries run from Logstash look similar to this:
SELECT * FROM TABLE WHERE FIELD_FOR_DELTA > :sql_last_value;
If FIELD_FOR_DELTA doesn't have an index, then Oracle will search through all records to find entries satisfying the condition. But when FIELD_FOR_DELTA has an index, Oracle will either search through a small portion of the table or will only check the record with the highest value and finish the query if that value is equal or smaller. If you don't have an index on this field, you should consider adding one, because of the potentially improved query performance and lowered DB impact from Logstash.
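For example, a hypothetical index on the delta column; the table and column names below are placeholders for your own schema:
-- Index on the column used for delta ingestion, so Logstash's range query avoids a full scan
CREATE INDEX ix_mytable_field_for_delta ON mytable (field_for_delta);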

Data copy from Oracle to Postgres using Hibernate

I am new to Hibernate JPA. I am working on an Oracle-to-Postgres migration and we are not using the AWS DMS service for data migration. We would like to move ahead with Java for copying tables which have more than 1 million records. I have a problem with the scenario below.
Table A - Oracle
Table B - Postgres
I am extracting records from Oracle using ScrollableResults. Once I have the data from Oracle, I need to look up a value in the Postgres database for each Oracle record before performing the insert into the Postgres database.
I first thought @ColumnTransformer would help, but it does not, as I don't know how to reference the Oracle data in the ColumnTransformer expression.
So I finally went ahead with writing a normal insert query with values and a subquery for the lookup (roughly the pattern sketched below). I also set hibernate.jdbc.batch_size to 100.
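For illustration, a rough sketch of that insert-with-subquery pattern; target_table, lookup_table, and the column names are hypothetical placeholders:
-- Each Oracle row is inserted with its values, and the Postgres lookup is done inline
INSERT INTO target_table (id, name, lookup_id)
VALUES (?, ?, (SELECT l.id FROM lookup_table l WHERE l.code = ?));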
Running the program this way took 5 minutes for 10k records, which I feel is slow.
Is there any other solution to this problem to improve the performance?
Thanks for all your help
I found the solution. I solved it by loading the Postgres lookup table into an in-memory list and searching that list before performing each insert. Now the speed is good.

How to convert CONNECT BY in Greenplum

Can anyone suggest how to convert this Oracle CONNECT BY query to Greenplum? Greenplum doesn't support recursive queries, so we cannot use WITH RECURSIVE. Is there any alternative way to rewrite the query below?
SELECT child_id, Parnet_id, LEVEL , SYS_CONNECT_BY_PATH (child_id,'/') as HIERARCHY
FROM pathnode
START WITH Parnet_id = child_id
CONNECT BY NOCYCLE PRIOR child_id = Parnet_id;
There are ways to do this but it will be a one-off per query. You will need to create a function that loops through your pathnode table and uses RETURN NEXT to return each row. You can search on this site to find examples of doing this with PostgreSQL 8.2.
Work is happening to rebase Greenplum onto PostgreSQL 8.3, 8.4, and so on. Those later PostgreSQL versions support WITH RECURSIVE, which is the ANSI SQL way to write this query (see the sketch after this answer), but Greenplum doesn't support it yet. Even when Greenplum does support it, I don't think it will perform all that well: the query forces looping and individual row lookups, which works great in an OLTP database but not so well in an MPP database.
I suggest you transform your data in Oracle with a VIEW and then just dump the view to a file to load into Greenplum. The DDL of having a self-referencing, N-level table will never be a good idea in an MPP database.
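For reference, a sketch of the ANSI WITH RECURSIVE form that PostgreSQL 8.4+ would accept, assuming pathnode has exactly the columns child_id and Parnet_id and that the ids can be cast to text for the path:
WITH RECURSIVE hier (child_id, parnet_id, lvl, hierarchy, path_ids) AS (
    -- anchor rows: START WITH Parnet_id = child_id
    SELECT child_id, parnet_id, 1,
           '/' || child_id::text, ARRAY[child_id]
    FROM pathnode
    WHERE parnet_id = child_id
    UNION ALL
    -- recursive step: CONNECT BY PRIOR child_id = Parnet_id
    SELECT p.child_id, p.parnet_id, h.lvl + 1,
           h.hierarchy || '/' || p.child_id::text,
           h.path_ids || p.child_id
    FROM pathnode p
    JOIN hier h ON p.parnet_id = h.child_id
    WHERE p.child_id <> ALL (h.path_ids)    -- emulates NOCYCLE
)
SELECT child_id, parnet_id, lvl, hierarchy
FROM hier;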

SSIS - Iterating with SQL Server Data in ForEachLoop to Dataflow with Oracle Backend and Inserting Results to SQL Server

Hey EXPERIENCED SSIS DEVELOPERS, I need your help.
High-Level Requirements
Query a SQL Server table (on a different server than my SSIS server), resulting in a result set of about 200-300k records.
Use three output columns from each row to look up data in an Oracle database.
Insert or Update SQL Server table with results.
Use SSIS.
SQL Server 2008
Sounds easy, right?
Here is what I have done:
Created an Execute SQL Task on the Control Flow that gets a recordset from SQL Server. Very fast, easy query, like select field1, field2, field3 from table where condition > 0. That's it. Takes less than a second.
Created a variable (evaluated as expression) for the Oracle query that uses the results set from the above in the WHERE clause.
Created a ForEachLoop Container that takes the results (from #1 above) for each row in the recordset and runs it through a Data Flow that uses the Oracle query (from #2 above) with Data access mode: SQL command from variable against an Oracle data source. Fast, simple query with only about 6 columns returned.
Data Conversion - obvious reasons - changing 3 columns from Oracle data types to SQL Server data types.
OLE DB Destination to insert to SQL Server using Fast Load to a staging table (the insert-or-update from staging into the target table is sketched at the end of this question).
It works perfectly! Hooray! Bad news - it is very, very slow. When I say slow, I mean it processes 3000 records per hour. Holy moly - so freaking slow.
Question: am I missing a way to speed it up? It seems like the ForEachLoop Container is the bottleneck. Growl.
Important Points:
- I have NO write access in Oracle environment, so don't even suggest a potential solution that requires it. Not a possibility. At all.
- Oracle sources do not allow for direct parameter definition. So no SELECT FIELD FROM TABLE WHERE ?. Don't suggest it - doesn't work.
Ideas
- Should I find a way to break down the results of the Execute SQL task and send them through several ForEachLoop Containers for faster processing?
- Is there another design that is more appropriate?
- Is there a script I can use that is faster?
- Would it be faster to create a temporary table in memory and populate it - then use the results to bulk insert to SQL Server? Does this work when using an Oracle data source?
ANY OTHER IDEAS?
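For reference, the insert-or-update step from the staging table into the target table looks roughly like the sketch below; the table and column names are hypothetical placeholders, and MERGE is available on SQL Server 2008:
-- Upsert from the fast-loaded staging table into the target table
MERGE dbo.target_table AS t
USING dbo.staging_table AS s
    ON t.business_key = s.business_key
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1, t.col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (business_key, col1, col2)
    VALUES (s.business_key, s.col1, s.col2);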
