Azure SQL Database - how can I tell what is blocking my IO - performance

Is there a way to 'defrag' a table in Azure SQL, or to understand why a table scan is so slow? I'm using an Azure SQL Database to drive an SSAS Tabular Model. My source table has ~30M rows and is currently reading/processing only 1M rows/hour; when it had 15M rows it was processing 1M rows/minute. I'm using sp_whoisactive, but for the query that's driving the Tabular process I see IO blocking and no CPU/IO values. Other queries which don't need a table scan run fine.
Thanks for your help
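For the 'defrag' part: Azure SQL Database does support ALTER INDEX ... REORGANIZE/REBUILD, and sys.dm_db_index_physical_stats reports fragmentation. For the waits, sys.dm_exec_requests shows each session's current wait type and any blocking session. A minimal JDBC sketch that dumps those columns while the processing query runs (the connection string is a placeholder; it assumes the mssql-jdbc driver is on the classpath):

```java
import java.sql.*;

public class AzureWaitCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection string; substitute your server, database, and credentials.
        String url = "jdbc:sqlserver://yourserver.database.windows.net;databaseName=yourdb;"
                   + "user=youruser;password=yourpassword";
        // sys.dm_exec_requests exposes the current wait type and blocking session per request.
        String sql = "SELECT r.session_id, r.status, r.wait_type, r.wait_time, "
                   + "r.blocking_session_id, r.cpu_time, r.reads, r.writes "
                   + "FROM sys.dm_exec_requests r WHERE r.session_id <> @@SPID";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("session=%d status=%s wait=%s ms=%d blocked_by=%d%n",
                        rs.getInt("session_id"), rs.getString("status"),
                        rs.getString("wait_type"), rs.getLong("wait_time"),
                        rs.getInt("blocking_session_id"));
            }
        }
    }
}
```

If the scan's wait type turns out to be IO-related (e.g. PAGEIOLATCH_*), that points at the storage tier of the database rather than blocking by another session.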

Related

Memory usage in the Oracle server when using JDBC setFetchSize

When I use the setFetchSize() method for a select statement, for example
select * from tablename
on a large table, the Oracle JDBC driver limits memory usage in the JDBC client.
What I am curious about, however, is whether this statement will cause the Oracle server to store all the rows in server memory, ignoring the fetch size, and lead to an OutOfMemory error on the Oracle server.
No. When Oracle processes a cursor (the select), it does not pull all the rows of the table into memory at once.
Oracle has a complex and robust architecture.
Oracle uses a number of criteria to classify a table as "large" or "small".
When a cursor is used normally (through the SQL engine), it is not possible to get an OutOfMemory condition in the server process.
For example, if your server-side code processes data through PL/SQL collections, you can fetch rows in the server process without specifying a limit, and if the process then reaches the PGA limit (PGA_AGGREGATE_LIMIT), it will crash (after which all resources occupied by the process are freed).
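On the client side, here is a minimal JDBC sketch of the fetch-size behavior the question describes (connection details are placeholders): the driver pulls rows from the open cursor in fetch-size batches, so neither the client nor the server materializes the whole result set.

```java
import java.sql.*;

public class FetchSizeDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection URL and credentials.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
             Statement st = con.createStatement()) {
            st.setFetchSize(500); // the driver fetches ~500 rows per round trip
            try (ResultSet rs = st.executeQuery("select * from tablename")) {
                while (rs.next()) {
                    // process one row at a time; only the current fetch batch
                    // is held in client memory, and the server keeps cursor
                    // state rather than the full result set
                }
            }
        }
    }
}
```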
This topic is not simple; the database mechanics are hard to explain in one post.
If you are interested in understanding this in more detail, I think the following links may be useful.
Additional links:
SQL Processing
Working with Cursors
Oracle Relational Data Structures
Oracle Data Access
Oracle Database Storage Structures
Process Architecture

Can we do real-time replication based on a condition on a column in Oracle SQL?

Currently we are using Oracle SQL and replicating data. After doing some analysis, we found out that we are not using most of the data that we replicate. To optimize this, we thought of replicating data based on column conditions. Can this be achieved in Oracle DB? If so, what strategy can we use here?

Data copy from Oracle to Postgres using Hibernate

I'm new to Hibernate JPA. I am working on an Oracle to Postgres migration, and we are not using the AWS DMS service for the data migration. We would like to move ahead with Java for copying tables which have more than 1 million records. I have a problem with the scenario below.
Table A - Oracle
Table B - Postgres
I'm extracting records from Oracle using ScrollableResults. Once I have the data from Oracle, I need to look up a value in the Postgres database for each Oracle row before performing the insert into the Postgres database.
I first thought @ColumnTransformer would help, but it does not, as I don't know how to reference the Oracle data in the ColumnTransformer expression.
So I finally went ahead and wrote a normal insert query with values and a subquery for the lookup. I also set hibernate.jdbc.batch_size to 100.
Executed this way, the program took 5 minutes for 10k records, which I feel is slow.
Is there any other solution to improve the performance of this?
Thanks for all your help
I found the solution: store the Postgres lookup table in a list object, then search that in-memory list before performing each insert. Now the speed is good.
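A minimal plain-JDBC sketch of that fix (the original uses Hibernate, but the pattern is the same; URLs, credentials, and table/column names below are placeholders): cache the lookup table once, then batch the inserts, so the per-row subquery against Postgres disappears.

```java
import java.sql.*;
import java.util.*;

public class OracleToPostgresCopy {
    public static void main(String[] args) throws SQLException {
        // Placeholder URLs, credentials, and table/column names.
        try (Connection ora = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//orahost:1521/ORCL", "user", "pass");
             Connection pg = DriverManager.getConnection(
                     "jdbc:postgresql://pghost:5432/targetdb", "user", "pass")) {

            // 1. Load the Postgres lookup table into memory once.
            //    A HashMap keyed on the join column beats scanning a List per row.
            Map<String, Long> lookup = new HashMap<>();
            try (Statement st = pg.createStatement();
                 ResultSet rs = st.executeQuery("SELECT code, id FROM lookup_table")) {
                while (rs.next()) lookup.put(rs.getString(1), rs.getLong(2));
            }

            // 2. Stream rows from Oracle and insert into Postgres in batches of 100,
            //    resolving the lookup in client memory instead of a per-row subquery.
            pg.setAutoCommit(false);
            try (Statement ost = ora.createStatement();
                 PreparedStatement ins = pg.prepareStatement(
                         "INSERT INTO table_b (col1, col2, lookup_id) VALUES (?, ?, ?)")) {
                ost.setFetchSize(1000); // stream instead of buffering the whole table
                try (ResultSet rs = ost.executeQuery(
                        "SELECT col1, col2, lookup_code FROM table_a")) {
                    int n = 0;
                    while (rs.next()) {
                        ins.setString(1, rs.getString(1));
                        ins.setString(2, rs.getString(2));
                        ins.setObject(3, lookup.get(rs.getString(3))); // in-memory lookup
                        ins.addBatch();
                        if (++n % 100 == 0) ins.executeBatch();
                    }
                    ins.executeBatch(); // flush the final partial batch
                }
                pg.commit();
            }
        }
    }
}
```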

SSIS - Iterating with SQL Server Data in ForEachLoop to Dataflow with Oracle Backend and Inserting Results to SQL Server

Hey EXPERIENCED SSIS DEVELOPERS, I need your help.
High-Level Requirements
Query a SQL Server table (on a different server than my SSIS server), resulting in a result set of about 200-300k records.
Use three output columns from each row to look up data in an Oracle database.
Insert or Update SQL Server table with results.
Use SSIS.
SQL Server 2008
Sounds easy, right?
Here is what I have done:
Created an Execute SQL Task on the Control Flow that gets a recordset from SQL Server. Very fast, easy query, like select field1, field2, field3 from table where condition > 0. That's it. Takes less than a second.
Created a variable (evaluated as expression) for the Oracle query that uses the result set from the above in the WHERE clause.
Created a ForEachLoop Container that takes the results (from #1 above), and for each row in the recordset runs it through a Data Flow that uses the Oracle query (from #2 above) with Data access mode: SQL command from variable against an Oracle data source. Fast, simple query with only about 6 columns returned.
Data Conversion - obvious reasons - changing 3 columns from Oracle data types to SQL Server data types.
OLE DB Destination to insert to SQL Server using Fast Load to staging table.
It works perfectly! Hooray! Bad news - it is very, very slow. When I say slow, I mean it processes 3,000 records per hour. Holy moly - so freaking slow.
Question: am I missing a way to speed it up? It seems like the ForEachLoop Container is the bottleneck. Growl.
Important Points:
- I have NO write access in the Oracle environment, so don't even suggest a potential solution that requires it. Not a possibility. At all.
- Oracle sources do not allow for direct parameter definition. So no SELECT FIELD FROM TABLE WHERE ?. Don't suggest it - doesn't work.
Ideas
- Should I find a way to break down the results of the Execute SQL task and send them through several ForEachLoop Containers for faster processing? (See the sketch after this list.)
- Is there another design that is more appropriate?
- Is there a script I can use that is faster?
- Would it be faster to create a temporary table in memory and populate it - then use the results to bulk insert to SQL Server? Does this work when using an Oracle data source?
ANY OTHER IDEAS?
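On idea #1, the win usually comes from cutting round trips rather than from parallel loops: batch the key values into IN-lists so Oracle is queried once per chunk instead of once per row. A minimal sketch of the pattern outside SSIS, using one key column for brevity and hypothetical connection strings, table, and column names (in SSIS the equivalent is building the IN-list into the query variable per chunk):

```java
import java.sql.*;
import java.util.*;

public class ChunkedOracleLookup {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection strings, table and column names.
        try (Connection sqls = DriverManager.getConnection(
                     "jdbc:sqlserver://sqlhost;databaseName=src;user=u;password=p");
             Connection ora = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//orahost:1521/ORCL", "u", "p")) {

            // 1. Pull the 200-300k key values from SQL Server once.
            List<String> keys = new ArrayList<>();
            try (Statement st = sqls.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT field1 FROM src_table WHERE condition > 0")) {
                while (rs.next()) keys.add(rs.getString(1));
            }

            // 2. Query Oracle once per chunk of 500 keys
            //    (Oracle caps IN-lists at 1000 expressions).
            int chunk = 500;
            for (int i = 0; i < keys.size(); i += chunk) {
                List<String> slice = keys.subList(i, Math.min(i + chunk, keys.size()));
                StringBuilder in = new StringBuilder();
                for (String k : slice) {
                    if (in.length() > 0) in.append(',');
                    in.append('\'').append(k.replace("'", "''")).append('\''); // escape quotes
                }
                String sql = "SELECT col1, col2, col3 FROM ora_table WHERE key_col IN (" + in + ")";
                try (Statement ost = ora.createStatement();
                     ResultSet ors = ost.executeQuery(sql)) {
                    while (ors.next()) {
                        // stage each row, then bulk insert the staged rows
                        // back to SQL Server in one pass
                    }
                }
            }
        }
    }
}
```

This needs no write access on the Oracle side and no Oracle-side parameters, which fits both constraints above.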

Migrate / Copy Huge data from Oracle to SQL Server

I have huge tables in an Oracle database, approx 1 crore+ (10 million+) rows, and want to migrate/copy those tables and their data into SQL Server.
Currently I am using the Import functionality of SQL Server for this, but it takes a day, which is too much time.
Is there any better way? Any good approach or step (SSIS, or any other functional step) to follow for this process?
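One commonly used alternative to the Import wizard is the bulk copy API in Microsoft's JDBC driver (SQLServerBulkCopy), which can stream an Oracle ResultSet straight into SQL Server. A minimal sketch with placeholder connection details and table names, assuming the source and destination schemas already line up:

```java
import java.sql.*;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy;
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions;

public class OracleToSqlServerCopy {
    public static void main(String[] args) throws SQLException {
        // Placeholder URLs, credentials, and table names.
        try (Connection ora = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//orahost:1521/ORCL", "user", "pass");
             Connection sqls = DriverManager.getConnection(
                     "jdbc:sqlserver://sqlhost;databaseName=target;user=u;password=p");
             Statement st = ora.createStatement()) {

            st.setFetchSize(10000); // stream from Oracle instead of buffering everything
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                SQLServerBulkCopyOptions opts = new SQLServerBulkCopyOptions();
                opts.setBatchSize(10000);
                opts.setBulkCopyTimeout(0); // no timeout for a long-running copy

                try (SQLServerBulkCopy bulk = new SQLServerBulkCopy(sqls)) {
                    bulk.setBulkCopyOptions(opts);
                    bulk.setDestinationTableName("dbo.big_table");
                    bulk.writeToServer(rs); // bulk insert via the driver's bulk copy API
                }
            }
        }
    }
}
```

SSIS with an OLE DB Destination in Fast Load mode is the same idea inside a package; either way, the speedup comes from bulk inserts rather than row-by-row inserts.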
