I am stress testing a database table
I am looking for software that can connect to my database and show me metrics like the number of rows in a table, time per insert, inserts per second, table fragmentation [logical/physical], etc.
It would be great if the reporting tool can do the following:
1] Reports in real time, or at least at some interval, so that I do not have to wait for the test to finish to get a first look at the data
2] Ability to work with the data later, e.g. get the 99.99th percentile, average, etc.
3] Is free, or mostly free :)
Does anyone have a suggestion for something I can use with my Oracle table? Any pointers would be great.
I could write scripts to log things like select count(*), but then I would have to spend a lot of time parsing and massaging the data for reporting rather than working on the tests.
I suspect something intelligent might already be out there?
Thanks
Edit:
I am looking at a piece of design for a new architecture.
The tests are "comparison" tests for different designs, so as long as I run them on the same hardware and the same schema etc., they are comparable to some granularity.
I want to monitor index fragmentation, response times, etc. If you think there are other things that can change, please let me know. I am trying to roll the table back to a particular state [basically truncate] for each new iteration of the test.
First, Oracle has built-in functionality for telling you the number of rows in a table (either use count(*) or search 'gather statistics oracle' for another option).
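For example, a minimal sketch in SQL*Plus (MY_SCHEMA and MY_TABLE are placeholder names):
-- exact row count (can be slow on a very large table)
SELECT COUNT(*) FROM my_schema.my_table;
-- or gather optimizer statistics and read the stored row count
EXEC DBMS_STATS.GATHER_TABLE_STATS('MY_SCHEMA', 'MY_TABLE');
SELECT num_rows, blocks, avg_row_len
FROM   all_tables
WHERE  owner = 'MY_SCHEMA' AND table_name = 'MY_TABLE';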
But "stress testing a table" sounds to me like you're going down the wrong path. Most of the metrics you're mentioning ("time for inserts , inserts/time, table fragmentation[logical/physical] etc") are highly dependent on many factors:
what OS Oracle's running on
how the OS is tuned (e.g. other services running)
how the specific Oracle instance is configured
what underlying storage architecture Oracle's using (and how tablespaces are configured)
what other queries are being executed in the database at the exact same time as your test
But NONE of them would be related to the table design itself.
Now, if you're wondering if your normalized (or de-normalized) table schema is hurting your application, that's another matter. As is performance being degraded by improper/unneeded/missing indexes, triggers, or a host of other problems.
If you really want an app that will give you real-time monitoring, check out Quest Software's Spotlight on Oracle. But it's definitely not free.
Just to add to the other comments, I believe what you really want is to stress test the queries you're running and not the table. The table is just a bunch of data blocks on a disk and the query is what will make the difference in performance as far as development is concerned. That will tell you if you need different indexes or need to redesign the query.
On the other hand, if you're looking at it as a DBA or system administrator, you're probably more interested in OS level statistics especially disk latency, memory paging, and CPU utilization.
All this is available in Enterprise Manager, which is my primary tuning tool for both development and DBA work. If you don't have that, read up on using sql_trace to profile your queries, and on your OS-specific documentation for how to get those stats.
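A minimal sketch of the sql_trace route, assuming you can run it from the session you want to profile (the trace file name below is just an example):
-- turn tracing on, run the workload, then turn it off
ALTER SESSION SET sql_trace = TRUE;
-- ... run the queries you want to profile ...
ALTER SESSION SET sql_trace = FALSE;
Then format the raw trace file from the server's trace directory with tkprof, e.g. tkprof ora_12345.trc report.txt sys=no.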
I'm struggling with performance in Oracle. The situation: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes in 15 seconds, and the plan uses appropriate indexes.
If the same query is used to fill a table in a stored procedure, Oracle uses another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. That's horrible.
What can I do?
The Oracle query optimizer tries 2000 different possibilities and chooses the best one in normal situations. But if you think it chose the wrong plan, you may suspect the following cases:
1- The histograms on the queried tables are stale.
2- Your indexes cannot be used because of how the query is written.
3- You can use index hints to force the indexes to be used (see the sketch after this list).
4- You can use the SQL Tuning Advisor or run TKPROF for performance analysis and decide what's wrong or what caused the bad performance. Also check network and disk I/O values, etc.
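For point 3, a minimal sketch of an index hint (employees and emp_name_idx are hypothetical names; the hint asks the optimizer to use the named index for that table alias):
SELECT /*+ INDEX(e emp_name_idx) */ e.*
FROM   employees e
WHERE  e.last_name = 'SMITH';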
If you share your query we can give you more information.
It looks like we are not talking about the same query under the two different conditions.
The first case is a simple SELECT over the dblink, and the second case is an "insert as select" over the dblink.
Can you please share the two queries and execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I was able to create a new DB plan with Enterprise Manager. Now it's running perfectly.
Initial situation
I have SQL Server Express 2008 R2 running. There are ten users who read/write constantly to the same tables using stored procedures. They do this day and night.
Problem
The performance of the stored procedures gets lower and lower as the database size increases.
A stored procedure call takes 10 ms on average when the database size is about 200 MB.
The same call takes 200 ms on average when the database size is about 3 GB.
So we have to clean up the database once a month.
We have already done index optimization on some tables, with positive effects, but the problem still exists.
Finally, I'm not a SQL Server expert. Could you give me some hints on where to start getting rid of this performance problem?
Download and read Waits and Queues
Download and follow the Troubleshooting SQL Server 2005/2008 Performance and Scalability Flowchart
Read Troubleshooting Performance Problems in SQL Server 2005
The SQL Server Express Edition limitations (1 GB memory for the buffer pool, only one CPU socket used, 10 GB database size) are unlikely to be the issue. Application design, bad queries, excessive locking and concurrency problems, and poor indexing are more likely to be the problem. The linked articles (especially the first one) include methodology on how to identify the bottleneck(s).
This is most likely simply a programming mistake. It sounds like you have either:
Improper indexing on some tables. This is not "optimization": bad indexes are like broken HTML for web people. If you have no index, you are basically not using SQL as it is supposed to be used; you should always have proper indexes.
Not enough hardware, such as RAM. Yes, it can manage a 10 GB database, but if your hot set (the stuff accessed all the time) is 2 GB and you have only 1 GB, it will hit disk more often than it needs to.
Slow disks, particularly an Express problem, because most people do not bother to get a proper disk layout. Then they run a SQL database against a slow 200 IOPS end-user disk where, depending on need, a SQL database wants many spindles or an SSD (a typical SSD these days does 40,000 IOPS).
That is it, in the end, plus possibly really bad SQL. A typical filter error is someformula(field) LIKE value, which means "forget your index, please do a table scan and calculate someformula(field) for every row before checking"; see the sketch below.
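A small illustration of that kind of filter in T-SQL (orders and order_date are hypothetical names; the first form defeats an index on order_date, the second can use it):
-- non-sargable: the function is applied to every row, so the index is ignored
SELECT * FROM orders WHERE YEAR(order_date) = 2012;
-- sargable rewrite: a range on the bare column can use the index
SELECT * FROM orders
WHERE  order_date >= '20120101' AND order_date < '20130101';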
First, SQL Server Express is not the best edition for your requirement. Get a Developer Edition to test with. It's exactly like Enterprise but free, as long as you don't use it in production.
As for performance, there are many things involved here, and you can improve it with anything from indexes to partitioning. We need more info to provide help.
Before optimizing your SQL queries, you need to find the hotspots among them. Usually you would use SQL Profiler to do this on SQL Server; the Express edition has no such tool, but you can work around it with a few queries:
Return all recent queries:
SELECT *
FROM sys.dm_exec_query_stats ORDER BY total_worker_time DESC;
Return only the top time-consuming queries:
SELECT total_worker_time, execution_count, last_worker_time, dest.TEXT
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY total_worker_time DESC;
Now you should know which query needs to be optimized.
It may be poor indexes, poor database design, a lack of normalization, unwanted column indexes, or poor queries that take a long time to execute.
SQL Server Express is built for testing purposes, and its performance is limited by Microsoft. If you use it in a production environment, you may want to get a license for SQL Server.
Have a look here: SQL Express for production?
I have a simple need: I need to see how many queries a vendor's application is running against our Oracle (11g r2) database. Is there a way to query a system table (e.g. v$...) to see this information?
Thanks!
Have you looked at V$SQLSTATS? I'm not sure how you're going to differentiate the vendor-specific SQL from other SQL that might have run, but this could give you a feel for what has been running.
If all you're interested in is what's happening now, then perhaps V$SQLAREA is what you want.
V$SQL has some stats and can tell you the module and action (see DBMS_APPLICATION_INFO).
V$SQLSTATS has more statistics but less ability to determine the application source.
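A rough sketch of that approach, assuming the vendor application sets its module name (the 'VENDOR_APP' value is a placeholder):
SELECT module, action, sql_id, executions, elapsed_time, sql_text
FROM   v$sql
WHERE  module = 'VENDOR_APP'   -- whatever module the vendor app actually reports
ORDER  BY executions DESC;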
DBAs can be overly protective and take the easy way out by banning access to really useful stuff.
See if a view (that masks the sensitive info) can be created and granted to you, so that you have access to the really useful stuff but not to the sensitive stuff.
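A minimal sketch of what such a view and grant could look like (names are illustrative; the DBA would run this, and creating it requires direct access to the underlying V$ views):
CREATE OR REPLACE VIEW vendor_sql_activity AS
SELECT sql_id, module, action, executions, elapsed_time
FROM   v$sql;   -- deliberately omits sql_text if the statement text is considered sensitive
GRANT SELECT ON vendor_sql_activity TO monitoring_user;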
Is it OK to use those views in production? I mean, are queries against the data dictionary intended to be called frequently, or are they designed only for very rare use with tools like SQL Navigator, SQL Developer, etc.?
It depends on your definition of "frequently", the size of those objects in your database, and why you need to query them.
In general, it's fine to query data dictionary tables on a regular basis in production-- tons of database monitoring tools, for example, will regularly query a bunch of data dictionary tables to gather performance data. At the same time, though, you can easily configure most of these tools to put a tremendous load on your database by gathering too much data too frequently, to the point where the performance monitoring tool itself becomes the source of performance problems. Normally, you can just dial back the amount of data being captured and how often it is captured to get 99% of the monitoring benefit without creating a bunch of issues.
I'm not sure why any tool would frequently need to query user_tables-- since tables aren't getting created or destroyed at runtime in a proper system, there aren't too many reasons why you'd really need to query that particular view all that frequently.
I’ve been tasked with optimizing a rather nasty stored procedure in a legacy system. It’s a database dedicated to search, and a new copy is generated every day, with a lot of complex joins being de-normalized. No writes are performed, only SELECTs, so I figured some easy improvements could be made by making the whole database read-only and changing the recovery model to “Simple”.
Much to my surprise, this didn’t help at all! The stored procedure still takes the same amount of time to complete. In fact, I’m so surprised that I figured I did it wrong!
My questions:
Do I need to do anything other than setting “Database read-only” to “true”?
Am I wrong to expect significant performance improvement by making the database read-only?
Same for the recovery model: Shouldn’t “Simple” have some noticeable impact?
Are there other similar database-wide configurations that can improve performance in this scenario?
The stored procedure is huge, with temporary tables, 40+ tables joined in 20+ queries. But I’d like to optimize the database itself before I edit this proc.
Since no writes are performed by your SP, there is no reason to expect a noticeable performance improvement from changing the recovery model or the read-write mode.
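For reference, a minimal sketch of the two settings being discussed (SearchDB is a placeholder database name, and setting READ_ONLY needs exclusive access to the database):
ALTER DATABASE SearchDB SET RECOVERY SIMPLE;
ALTER DATABASE SearchDB SET READ_ONLY;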
As others mentioned, you should look into the query plan and optimize your queries.
Another hint: the indexes in the database may have become fragmented while it was being filled. Since the data is not going to be modified any more, it might help to rebuild all the indexes with fillfactor 100; this can get rid of the fragmentation and compact the data.
Call this for each table in the database: ALTER INDEX ALL ON table_name REBUILD WITH (FILLFACTOR = 100).
Generally, I wouldn't expect much of a performance improvement from this, but it depends on the particular database.
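A rough sketch of applying that statement to every table via a cursor over sys.tables (illustrative only):
DECLARE @sch sysname, @tbl sysname, @sql nvarchar(max);
DECLARE tables_cur CURSOR FOR
    SELECT s.name, t.name
    FROM   sys.tables t
    JOIN   sys.schemas s ON s.schema_id = t.schema_id;
OPEN tables_cur;
FETCH NEXT FROM tables_cur INTO @sch, @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- rebuild every index on the current table with a full fill factor
    SET @sql = N'ALTER INDEX ALL ON ' + QUOTENAME(@sch) + N'.' + QUOTENAME(@tbl)
             + N' REBUILD WITH (FILLFACTOR = 100);';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM tables_cur INTO @sch, @tbl;
END
CLOSE tables_cur;
DEALLOCATE tables_cur;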
Speaking of query optimization, there are very useful features in SQL Server 2005 and later: the execution-related and index-related Dynamic Management Views. In particular, sys.dm_exec_query_stats and the missing-index DMVs are of interest.
These give you almost the same information as the Tuning Advisor, but based on your real-life workload, so you don't need to simulate one and feed it to the Advisor.
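For example, a rough sketch of reading the missing-index DMVs, ordered by a crude estimated-benefit expression:
SELECT TOP 20
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM   sys.dm_db_missing_index_details      AS mid
JOIN   sys.dm_db_missing_index_groups       AS mig  ON mig.index_handle = mid.index_handle
JOIN   sys.dm_db_missing_index_group_stats  AS migs ON migs.group_handle = mig.index_group_handle
ORDER  BY migs.user_seeks * migs.avg_user_impact DESC;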
Have you tried using the Database Engine Tuning Advisor included in SQL Server? It will analyze your query and suggest new indexes that will improve the performance of the query. Some of them will be good, some will be bad (for example, I've seen it suggest adding every column in a table to an index, sometimes like 30 of them!), so I don't follow it blindly. Generally I'll add a few indexes and then retest, to find the suggestions that are the most important. I've used it to optimize many queries that I thought I had properly indexed, only to find I could get a lot more performance out of them.
I had a similar setup, large stored procedures with lots of large temp tables.
Our problem was that the joins with and between the temp tables were very slow.
I recommend that you look at your execution plan and try to add relevant indexes to the temp tables too, if you have not already.
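A small illustration of the idea (table and column names are hypothetical): build the temp table, then index its join key before the heavy joins.
CREATE TABLE #orders_stage (order_id int NOT NULL, customer_id int NOT NULL, total money);
-- ... populate #orders_stage ...
CREATE NONCLUSTERED INDEX ix_orders_stage_customer ON #orders_stage (customer_id);
-- subsequent joins on customer_id can now seek instead of scanning the temp table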