Slow filling of a .NET DataSet from Oracle 9i

This is a problem that my friend asked about over the phone. The C# 3.5 program he has written fills a DataSet from a Patient Master table which has 350,000 records. It uses the Microsoft ADO.NET driver for Oracle. The ExecuteQuery method takes over 30 seconds to fill the dataset. However, the same query (fetching about 20K records) takes less than 3 seconds in Toad. He is not using any transactions within the program. The table has an index on the column (Name) that is used in the search.
These are some alternatives I suggested:
1) Try using a DataReader to populate a DataTable and pass it to the form to bind to the combo box (which may not help much, since it is likely to take the same time; a sketch combining this with the next suggestion follows below).
2) Try Oracle's own ADO.NET driver (ODP.NET).
3) Use ANTS Profiler to see if you can pin the time on any particular ADO.NET line.
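For the first two suggestions combined, here is a minimal sketch of what I had in mind; the table and column names, the connection string handling, and the 1,000-row fetch size are my assumptions, not taken from his program. The idea is to read with a DataReader, raise the fetch size so fewer network round trips are needed, and load the rows into a DataTable that the form can bind to the combo box:

// Minimal sketch only: PATIENT_MASTER, the NAME column and the fetch size are assumptions.
using System.Data;
using Oracle.DataAccess.Client; // ODP.NET; System.Data.OracleClient is similar but lacks FetchSize

public static class PatientLookup
{
    public static DataTable LoadByName(string connectionString, string namePrefix)
    {
        var table = new DataTable("Patients");
        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(
            "SELECT NAME, PATIENT_ID FROM PATIENT_MASTER WHERE NAME LIKE :p_name", conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add("p_name", namePrefix + "%");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Pull roughly 1,000 rows per network round trip instead of the small default buffer.
                reader.FetchSize = reader.RowSize * 1000;
                table.Load(reader);
            }
        }
        return table;
    }
}

If that is still slow, the time is probably going into transferring or converting the rows themselves rather than into ADO.NET overhead.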
Has anyone faced similar problems, and what are some ways of resolving this?
Thanks,
Chak.

You really need to do an extended SQL trace to see where the slowness is coming from. Here is a paper from Cary Millsap (of Method R and formerly of Hotsos) that details doing this:
http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap
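If it helps, this is roughly how you can switch the trace on from the application's own connection so that only that session is traced; it's a sketch, the TRACEFILE_IDENTIFIER label is made up, and the trace file still has to be collected from the server's user_dump_dest and run through tkprof (or the Method R profiler):

using System.Data;

public static class OracleTrace
{
    // Sketch: turn on a level-12 10046 trace (waits + binds) for the current session.
    public static void Enable(IDbConnection openConnection)
    {
        using (var cmd = openConnection.CreateCommand())
        {
            cmd.CommandText = "ALTER SESSION SET TRACEFILE_IDENTIFIER = 'dataset_fill'";
            cmd.ExecuteNonQuery();
            cmd.CommandText =
                "ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'";
            cmd.ExecuteNonQuery();
        }
    }
}

Call Enable right after opening the connection, run the slow DataSet fill, and the resulting trace will show exactly where the 30 seconds go (parse, fetch round trips, or waits).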

Toad would typically only fetch the first x rows (500 in my setup), so double-check that the comparison is valid.
Then you should try to separate the DB work from the form work, if possible, to see whether it is the DB that is taking up the time.
If that's the case, try the Oracle libraries to see if they are any faster; we've seen 50% improvements between the latest Oracle driver and the standard Microsoft driver.

Without knowing the actual code he uses to accomplish his tasks, and without knowing the number of rows he's actually fetching (I'm hoping he doesn't read all 350K of them?), it's impossible to say anything that's going to help him.
Have him add a code snippet to the question for clarity.

Related

ORACLE - Which is better for generating a large result set of records: View, SP, or Function?

I've recently been working with an Oracle database to generate some reports. What I need is to get result sets of specific records (SELECT statements only), sometimes large ones, to be used for generating the reports in Excel files.
At first the reports were queried through views, but some of them are slow (they have some complex subqueries). I was asked to improve the performance and also fix some field mappings. I also want to tidy things up, because when I query against a view I must specifically call the right column names. I want to keep the data work in the database, with the web app just passing parameters and calling the right result set.
I'm new to Oracle, so which is better for this kind of task: an SP or a function? Or under what conditions might a view be better?
Makes no difference whether you compile your SQL in a view, SP or function. It is the SQL itself that matters.
As long as you are able to meet your requirements with the views, they should be a good option. If you intend to break up your queries into multiple ones to achieve better performance, then you should go for stored procedures. If you decide to go for stored procedures, it would be advisable to create a package and bundle all the stored procedures together in it. If your problem is performance, there may not be a silver-bullet solution; you will have to work on your queries and your design.
If the problem is performance due to complex SELECT query (queries), you can consider tuning the queries. Often you will find queries written 15-20 years ago, which do not use functionality and techniques that were introduced by Oracle in more recent versions (even if the organization spent the big bucks to buy the more recent versions - making it into a waste of money). Honestly, that may be too much of a task for you if you are new at Oracle; also, some slow queries may have been written by people just like you, many years ago - before they had a chance to learn a lot about Oracle and have experience with it.
Another thing: if the reports don't need to use the absolute current state of the underlying tables (for example, if "what was in the tables at the end of the business day yesterday" is acceptable), you can create a materialized view. Its refresh will not run any faster than the regular view's query, but it can run overnight (say), or every six hours, or whatever, so that the reporting that follows does not have to wait for the queries to complete. This is one of the main uses of materialized views.
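For illustration only, this is roughly what that looks like; the object names, the 02:00 refresh time, and the idea of simply materializing the existing slow view are assumptions on my part, and the DDL would normally be run once by a DBA rather than from application code:

using Oracle.DataAccess.Client;

public static class ReportSnapshot
{
    // Sketch: materialize the slow report view and refresh it every night at 02:00.
    // RPT_SNAPSHOT_MV and SLOW_REPORT_VIEW are made-up names.
    public static void CreateOnce(string connectionString)
    {
        const string ddl = @"
            CREATE MATERIALIZED VIEW RPT_SNAPSHOT_MV
            BUILD IMMEDIATE
            REFRESH COMPLETE
            START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1 + 2/24
            AS SELECT * FROM SLOW_REPORT_VIEW";

        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery(); // requires the CREATE MATERIALIZED VIEW privilege
        }
    }
}

The reports then select from RPT_SNAPSHOT_MV instead of the slow view and read pre-computed rows.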
Good luck!

Query duration in NHibernate Profiler

I have an ASP.NET MVC application which uses Fluent NHibernate to access an Oracle database. I also use NHibernate Profiler for monitoring the queries generated by NHibernate. I have one query which is really simple (selecting all rows from a table with 4 string columns). It is used for creating a report in CSV format. My problem is that the query is taking very long to run, and I would like to get a bit more insight into the durations displayed by NHProf. With 65,000 rows, it is taking 10-20 seconds, even though the "Database only" duration shows something like 20 ms. Network lag should not account for much of this time, because the servers are on the same gigabit LAN. I don't expect people to be able to pinpoint for me exactly where the bottleneck is, but what I would like to know is some more detail about how to read the duration measurements in NHibernate Profiler.
What is included in the "Database only" part, and what is included in the "Total time"? Does the total time also include the processing done after populating the C# objects, so that this time is actually for the entire HTTP request? Knowing more about this would hopefully help me eliminate some factors.
This is what the NHibernate mapping class looks like:
Table("V_TICKET_DETAILS");
CompositeId()
.KeyProperty(x => x.TicketId, "TICKET_ID")
.KeyProperty(x => x.Key, "COLUMN_NAME")
.KeyProperty(x => x.Parent, "PARENT_NAME");
Map(x => x.Value, "COLUMN_VALUE");
And the query generated by NHibernate Profiler looks like this:
SELECT this_.TICKET_ID as TICKET1_35_0_,
this_.COLUMN_NAME as COLUMN2_35_0_,
this_.PARENT_NAME as PARENT3_35_0_,
this_.COLUMN_VALUE as COLUMN4_35_0_
FROM V_TICKET_DETAILS this_
The view is really simple, only joining two tables on a 2-digit integer.
I am by no means a database expert, so I would be happy for all comments that would point me in the correct direction.
The total time is for the call to the nHib query only.
However, it includes, in addition to the time in the DB, the time it takes nHib to populate your entities (hydration), and that's likely your culprit.
I've had a similar problem; perhaps some of the suggestions there may help you.
The bottom line is that nHib is not really intended to load large datasets.
If none of the suggestions I got helped you, I would suggest a couple of things:
1. It's unlikely that your user needs to view 65,000 rows of data at the same time. Perhaps you can find a way to filter the data so that the result set is smaller (and more readable).
2. Otherwise, if it's, as you say, a special case that only occurs when you generate a report, you don't have to use nHib; you can just use, say, good ol' ADO.NET classes (a minimal sketch follows below)...
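As a rough illustration of that second point, something like this streams the rows straight from a DataReader into the CSV file without hydrating 65,000 entities; the driver choice and connection string are assumptions, and the column names come from the mapping above:

using System.IO;
using Oracle.DataAccess.Client;

public static class TicketCsvExport
{
    // Sketch: dump V_TICKET_DETAILS to CSV with plain ADO.NET instead of NHibernate.
    public static void Write(string connectionString, string path)
    {
        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(
            "SELECT TICKET_ID, COLUMN_NAME, PARENT_NAME, COLUMN_VALUE FROM V_TICKET_DETAILS", conn))
        using (var writer = new StreamWriter(path))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                writer.WriteLine("TicketId;Key;Parent;Value");
                while (reader.Read())
                {
                    // Naive separator handling; swap in a real CSV writer if values can contain ';'.
                    writer.WriteLine("{0};{1};{2};{3}",
                        reader.GetValue(0), reader.GetValue(1),
                        reader.GetValue(2), reader.GetValue(3));
                }
            }
        }
    }
}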
There is also IStatelessSession, which is intended for such situations. It doesn't have a session cache and saves a lot of work, so it should be a lot faster.
using (var session = factory.OpenStatelessSession())
{
    // Entity name assumed; no first-level cache, so hydration is much cheaper.
    var rows = session.CreateCriteria<TicketDetail>().List<TicketDetail>();
}

How to reduce the pressure on the database

I have a database (SQL Server 2005). There are now about 100,000 records in the table called users. When I query it using LINQ to SQL, it gets slower and slower. What can I do to improve the speed?
Analysing your query and adding some indexes to your table may help.
To get a more specific answer, post more specific information (table structure, the indexes you have, the SQL code L2S generates, ...).
You could (in order of preference):
1. Save your query as a stored procedure (a sketch of calling one from LINQ to SQL follows below this list).
2. Add indexes to your users table, on the columns you are querying/sorting on.
3. Analyze your query (if it is complicated) and see if there's a less resource-intensive way of doing it. There are graphical query analyzers to help you.
4. As a last resort, don't use LINQ but the ADO.NET Entity Framework instead; it's significantly faster. But you'll only see performance improvements for crazy stuff, and only if you've already done all of the above.
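For the stored-procedure route, here is a minimal sketch; the proc name, its parameters and the row class are invented for the example, and the point is simply that the server does the filtering and paging so only the rows you need are materialized:

using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

public class UserRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class UserQueries
{
    // Sketch: call a paging stored procedure through LINQ to SQL's DataContext.
    // dbo.GetUsersByName and the UserRow shape are assumed names.
    public static List<UserRow> Search(string connectionString, string name, int page, int pageSize)
    {
        using (var db = new DataContext(connectionString))
        {
            // ExecuteQuery maps each result row onto UserRow by column name and parameterizes the call.
            return db.ExecuteQuery<UserRow>(
                "EXEC dbo.GetUsersByName {0}, {1}, {2}", name, page, pageSize).ToList();
        }
    }
}

(A proc mapped in the L2S designer would instead appear as a method directly on your generated DataContext.)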
Use stored procedures and then use LINQ to SQL to get the desired rows; this will improve performance.
The best tools at your disposal for analyzing your database access and seeing what needs to be optimized are:
SQL Server Profiler
Graphical Execution Plans
The first one will allow you to see the exact queries being sent to your database from your application, which is especially useful if it turns out that your application is chattier than you think. The second one will allow you to take those queries and see exactly what the SQL server is doing with them.
In the graphical execution plan, look for steps which use a lot of CPU and paths which transfer a lot of records. Those are what you'll want to optimize. It's possible that you're doing a table scan somewhere, which is slow, or maybe joining on many more records than you need somewhere, which is slow, etc.

Best strategy for retrieving large dynamically-specified tables on an ASP.NET page

Looking for a bit of advice on how to optimise one of our projects. We have an ASP.NET/C# system that retrieves data from a SQL 2008 database and presents it in a DevExpress ASPxGridView. The data that's retrieved can come from one of a number of databases - all of which are slightly different and are being added and removed regularly. The user is presented with a list of live "companies", and the data is retrieved from the corresponding database.
At the moment, data is being retrieved using a standard SqlDataSource and a dynamically-created SQL SELECT statement. There are a few JOINs in the statement, as well as optional WHERE constraints, again dynamically-created depending on the database and the user's permission level.
All of this works great (honest!), apart from performance. When it comes to some databases, there are several hundreds of thousands of rows, and retrieving and paging through the data is quite slow (the databases are already properly indexed). I've therefore been looking at ways of speeding the system up, and it seems to boil down to two choices: XPO or LINQ.
LINQ seems to be the popular choice, but I'm not sure how easy it will be to implement with a system that is so dynamic in nature - would I need to create "definitions" for each database that LINQ could access? I'm also a bit unsure about creating the LINQ queries dynamically too, although looking at a few examples that part at least seems doable.
XPO, on the other hand, seems to allow me to create a XPO Data Source on the fly. However, I can't find too much information on how to JOIN to other tables.
Can anyone offer any advice on which method - if any - is the best to try and retro-fit into this project? Or is the dynamic SQL model currently used fundamentally different from LINQ and XPO and best left alone?
Before you go and change the whole way that your app talks to the database, have you had a look at the following:
Run your code through a performance profiler (such as Redgate's performance profiler); the results are often surprising.
If you are constructing the SQL string on the fly, are you using .NET best practices, such as String.Concat("str1", "str2") (or a StringBuilder) instead of repeated "str1" + "str2" concatenation? Remember, multiple small gains add up to big gains.
Have you thought about having a summary table or database that is periodically updated (say every 15 mins; you might need to run a service to update this data automatically), so that you are only hitting one database? New connections to databases are quite expensive.
Have you looked at the query plans for the SQL that you are running? Today I moved a dynamically created SQL string to a sproc (only 1 param changed) and shaved 5-10 seconds off the running time (it was being called 100-10000 times depending on some conditions); a sketch of that kind of change follows below.
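To make that last point concrete, this is the shape of the change; the proc and parameter names are invented for the example, and the win usually comes from a stable, reusable plan rather than a freshly concatenated SQL string on every call:

using System.Data;
using System.Data.SqlClient;

public static class CompanyData
{
    // Sketch: replace a dynamically concatenated SELECT with a parameterized stored procedure call.
    // dbo.GetCompanyRows and @CompanyId are made-up names.
    public static DataTable Load(string connectionString, int companyId)
    {
        var table = new DataTable();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetCompanyRows", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CompanyId", companyId);
            using (var adapter = new SqlDataAdapter(cmd))
            {
                adapter.Fill(table); // the adapter opens and closes the connection itself
            }
        }
        return table;
    }
}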
Just a warning if you do use LINQ: I have seen some developers who decided to use LINQ write more inefficient code because they did not know what they were doing (pulling 36,000 records when they needed to check for one, for example). These things are very easily overlooked.
Just something to get you started on and hopefully there is something there that you haven't thought of.
Cheers,
Stu
As far as I understand, you are talking about so-called server mode, when all data manipulations are done on the DB server instead of fetching the records to the web server and processing them there. In this mode the grid works very fast with data sources that can contain hundreds of thousands of records. If you want to use this mode, you should create either the corresponding LINQ classes or XPO classes. If you decide to use LINQ-based server mode, the LINQServerModeDataSource provides the Selecting event, which can be used to set a custom IQueryable and KeyExpression. I would suggest that you use LINQ in your application. I hope this information will be helpful to you.
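A rough sketch of what that Selecting handler tends to look like is below; the exact DevExpress type names can differ between versions, and the data context, table and key are placeholders of mine, not from your project:

// Sketch: hand the grid an IQueryable so paging/sorting/grouping run as SQL on the server.
// CompanyDataContext, Orders, GetConnectionStringForSelectedCompany and "OrderId" are placeholders.
protected void LinqServerModeDataSource1_Selecting(object sender,
    DevExpress.Data.Linq.LinqServerModeDataSourceSelectEventArgs e)
{
    var db = new CompanyDataContext(GetConnectionStringForSelectedCompany());
    e.QueryableSource = db.Orders;   // any dynamically built IQueryable
    e.KeyExpression = "OrderId";     // key the grid uses for paging and row identity
}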
I guess there are two points where performance might be tweaked in this case. I'll assume that you're accessing the database directly rather than through some kind of secondary layer.
First, you don't say how you're displaying the data itself. If you're loading thousands of records into a grid, that will take time no matter how fast everything else is. Obviously the trick here is to show a subset of the data and allow the user to page, etc. If you're not doing this then that might be a good place to start.
Second, you say that the tables are properly indexed. If this is the case, and assuming that you're not loading 1,000 records into the page at once and are retrieving only subsets at a time, then you should be OK.
But, if you're only doing an ExecuteQuery() against a SQL connection to get a dataset back, I don't see how LINQ or anything else will help you. I'd say that the problem is obviously on the DB side.
So to solve the problem with the database you need to profile the different SELECT statements you're running against it, examine the query plan and identify the places where things are slowing down. You might want to start by using the SQL Server Profiler, but if you have a good DBA, sometimes just looking at the query plan (which you can get from Management Studio) is usually enough.

SubSonic performance: can it handle information for up to 1 million songs?

I did notice the statement on SubSonic's page that it can handle 100,000+ files, but we would need to handle information for up to 1 million songs. Do we know where the 100,000 limitation comes from: is it based on database speed, hard drive capacity, or is that simply all it has been tested with?
Could you share some proven examples about this?
Did the statement you saw refer to files or tables?
I do recall a statement along the lines that SubSonic could process thousands of tables, but you would be waiting a while. This refers to the process of building the classes and has nothing to do with processing records.
In my experience, and speaking very generally, 1 million rows is a relatively small database. But it's not the size, it's the way you use it, and when it comes to databases, if you use one the wrong way you can bring a small database on a fast server to its knees. I'd have no hesitation using SubSonic to access a table containing a million rows, but as for a proven example, I'm not sure what you are asking for.
There are already a number of questions that discuss SubSonic performance; you should probably read through those:
Using Subsonic for potentially heavily accessed ASPNET MVC Application
https://stackoverflow.com/questions/146087/best-performing-orm-for-net
https://stackoverflow.com/questions/380620/what-object-mapper-solution-would-you-recommend-for-net-closed
Rob Conery has also written a blog post on SubSonic performance that would be worth reading:
http://blog.wekeroad.com/blog/subsonic-scaling/
For what it's worth in my experience SubSonic would have no trouble handling a table with a million rows.
This question goes back to what SubSonic is and how SubSonic works. SubSonic is more than just an ORM (Object Relational Mapper). SubSonic is an ORM with an awesome query builder and some helpful web controls to get you up and going in no time. If you have, say, 1 million records in a table, you are never going to want to do a
Select * From GinormousSongsTable
It would take forever for your database to return that many rows. More realistically, you are going to want to do something like this:
Select Top 50 * FROM GinormousSongsTable WHERE category = 'Rock'
This is where SubSonic will save you loads of time. SubSonic can create queries that handle paging, the TOP functionality, or whatever else you are looking for. If you want, you can return the 50 records as a GinormousSongsTableCollection so that you have the advantages of strongly typed objects; or, if you need the raw speed of a DataReader, you can return the query as a DataReader and have the same native speed as if you had gone to all the trouble of creating your own Connection, Command, Parameters etc. yourself. SubSonic scales well and lets you do what you need to (a rough sketch is below).
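For what it's worth, here is a sketch of that kind of paged query in the SubSonic 2.x fluent style; the generated GinormousSongsTable classes and the Category column are taken from the example above rather than a real schema, and the exact fluent API varies a little between SubSonic versions:

using SubSonic;

public static class SongQueries
{
    // Sketch: page 1 (50 rows) of the Rock songs, returned as a strongly typed collection.
    public static GinormousSongsTableCollection FirstPageOfRock()
    {
        return new Select()
            .From("GinormousSongsTable")
            .Where("Category").IsEqualTo("Rock")
            .Paged(1, 50)
            .ExecuteAsCollection<GinormousSongsTableCollection>();
    }
}

Swap ExecuteAsCollection for ExecuteReader if you want the raw DataReader speed mentioned above.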
