I have a SQL query:
ANALYZE TABLE CUST_STAT COMPUTE STATISTICS;
It works well in Oracle, but I am now switching to PostgreSQL, so I changed the SQL to:
ANALYSE CUST_STAT COMPUTE STATISTICS;
I have already read the manual section on partitioning, and I know the TABLE keyword is not needed in PostgreSQL, but I am still getting an error for the PARTITION clause:
ANALYZE CUST_STAT PARTITION CUST_STAT_P201307;
Can anyone help?
There is no COMPUTE STATISTICS sub-command for ANALYZE in PostgreSQL.
ANALYZE tablename;
per the manual on ANALYZE.
There is also no PARTITION keyword. PostgreSQL's partitioning is limited and largely manual. See the user manual section on partitioning.
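Since a partition in PostgreSQL's inheritance-based scheme is just an ordinary child table, you analyze it by naming it directly. A minimal sketch, assuming the child table is named as in the question:
ANALYZE CUST_STAT;            -- the parent table
ANALYZE CUST_STAT_P201307;    -- the child table holding that partition's rows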
The PostgreSQL manual is quite detailed and pretty good. I suggest reading it rather than trying to apply Oracle experience directly to Pg. They're not the same DB.
On partitioning, this tutorial is a bit old and is targeted at EnterpriseDB, but I think it uses only standard features, and it might help introduce the concepts. I haven't reviewed it in detail.
Another simple step-by-step example is on this blog entry.
Examples are no substitute for understanding though, and this is an area you need to understand, not just follow recipes for. If you don't have time for that I strongly recommend seeking someone who does to help you with your implementation in-depth.
I am trying to get the DDL of a table which is present in another database.
Is it possible to get the DDL of a table using a DB link?
That's kind of a tricky job. I've never had to do it, but Phil knows how. He shared his code (tested, as he said, on 9i and 10g); see if it helps.
Link to his blog: DBMS_METADATA Across Database Links!.
(Yes, I know; some people say that it is better to post code than links, but I'm not going to do that here because a) the code is quite long, and b) I don't have Phil's permission. Therefore, a link is all you get.)
As far as I know this is not possible, or at least not easy. You could make educated guesses based on data in all_tables or all_tab_cols, but getting storage-level info and other parameters (constraints, indexes, etc.) and reverse-engineering workable DDL commands would be much harder. DBMS_METADATA.GET_DDL is the only way I know of to be sure. I must also say that I do not find this surprising: being able to generate DDL for remote objects over a database link would be considered a HUGE security vulnerability in most systems.
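For reference, the local call looks like this; the table and schema names are placeholders:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_TABLE', 'MY_SCHEMA') FROM dual;
It returns a CLOB holding the CREATE TABLE statement, but it only reads the local data dictionary, which is exactly why the database-link case is so awkward.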
I'm working on a very large query in an inherited application. It is a large insert query that takes 4 tables with well over a million records. I know, I would also rather have this in SQL Server, but there is no infrastructure at this customer to do that :-)
This query has worked for over a year. However, the source tables keep growing, and last week it threw the dreaded 'out of system resources' error. Bummer...!
I think it is possible to optimize this query. Working in MySQL, I would use the EXPLAIN command to see where optimization might occur. Is there an equivalent of this in Access? I cannot seem to find one.
Kind regards,
Paul
Probably Jet ShowPlan is closest to what you want. You will have to set a registry key. Then query plan information gets dumped to a text file named SHOWPLAN.OUT. You can read about the details in this article on TechRepublic: Use Microsoft Jet's ShowPlan to write more efficient queries
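If memory serves, for Jet 4.0 it is a string value under the engine's Debug key; verify the exact path in the article, since it differs between Jet and ACE versions:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug
    JETSHOWPLAN = "ON"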
Also try the Performance Analyzer wizard. You can ask it to examine your query alone, or also ask it to examine table or other queries used by that query.
If you haven't compacted the database recently, see whether that improves performance. Compacting also updates index statistics which allows the engine to make better decisions for the query plan.
When you write rather complex SQL for Oracle, sooner or later you will have to apply the odd execution hint because Oracle can't seem to figure out the "best" execution plan itself.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/hintsref.htm
Now this is certainly not a SQL standard. But still, I'm wondering: are there any other RDBMS that support these kinds of hints, and I really mean hints that are "embedded" in the SQL? Are they similar syntactically (i.e. also placed between the SELECT keyword and the first selected COLUMN)? Do you know of a general documentation page comparing hints in various RDBMS?
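For example, in Oracle such a hint lives in a special comment right after the SELECT keyword; the table and index names here are made up:
SELECT /*+ INDEX(e EMP_NAME_IDX) */ e.*
FROM employees e
WHERE e.last_name = 'Smith';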
N.B.: I'm mostly interested in these RDBMS: Postgres, MySQL, HSQLDB, H2, Derby, SQLite, DB2, Sybase, SQL Server
I know that DB2 can fix plans in some way, though I don't know how. In Oracle 11g there are other options besides adding hints to queries: SQL Profiles and SQL Plan Baselines, both very powerful. I just finished a performance tuning project where we did not add even a single hint to the code; quite the contrary.
You can add Optimizer Hints to any SQL Server query.
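As a sketch with made-up names: table hints go in a WITH (...) clause after the table, and query-level hints in an OPTION (...) clause at the end:
SELECT o.customer_id, SUM(o.total) AS order_total
FROM dbo.Orders o WITH (NOLOCK)      -- table hint
GROUP BY o.customer_id
OPTION (MAXDOP 1);                   -- query-level hint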
The PLAN clause allows you to dictate a particular plan for your query in Firebird.
AFAIK there is nothing standard, nor close to it, but in general you can do this in a lot of RDBMSs, though not all.
I'd also remind you, if you are making some sort of comparison with other DB platforms, that hints in Oracle are entirely non-binding; that is, Oracle is free to disregard your hint if it so chooses.
Hints can be helpful, but I find that I rarely use them anymore, at least not compared to the past, when I was working with the older optimizers in earlier Oracle versions. Back then, hints were much more of a staple of performance tuning than they are now.
I’ve been tasked with optimizing a rather nasty stored procedure in a legacy system. It’s a database dedicated to search, and a new copy is generated every day, with a lot of complex joins being de-normalized. No writes are performed, only SELECTs, so I figured some easy improvements could be made by making the whole database read-only and changing the recovery model to “Simple”.
Much to my surprise, this didn’t help – at all! The stored procedure still takes the same amount of time to complete. In fact, I’m so surprised that I figured I did it wrong!
My questions:
Do I need to do anything other than setting “Database read-only” to “true”?
Am I wrong to expect significant performance improvement by making the database read-only?
Same for the recovery model: Shouldn’t “Simple” have some noticeable impact?
Are there other similar database-wide configurations that can improve performance in this scenario?
The stored procedure is huge, with temporary tables, 40+ tables joined in 20+ queries. But I’d like to optimize the database itself before I edit this proc.
Since no writes are performed by your SP, there is no reason to expect a noticeable performance improvement from changing the recovery model or the read-write mode.
As others mentioned, you should look into the query plan and optimize your queries.
Another hint: the indexes in the database may have become fragmented while the database was being filled. Since the data is not going to be modified any more, it might help to rebuild all the indexes with fill factor 100; this gets rid of fragmentation and compacts the data.
Call this for each table in the database: ALTER INDEX ALL ON table_name REBUILD WITH (FILLFACTOR = 100).
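One way to generate that statement for every user table is a quick query against the catalog views; this is a sketch, so review the output before running it:
SELECT 'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + ' REBUILD WITH (FILLFACTOR = 100);'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;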
Generally, I wouldn't expect much performance improvement from this, but it depends on the particular database.
Speaking of query optimization, there are very useful features in SQL Server 2005 and later: the execution-related and index-related Dynamic Management Views. In particular, sys.dm_exec_query_stats and the missing-index DMVs are of interest.
These give you almost the same information as the Tuning Advisor, but based on your real-life workload, so you don't need to simulate it and feed it to the Advisor.
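For instance, a minimal sketch that pulls the most CPU-hungry statements from sys.dm_exec_query_stats:
SELECT TOP 10
       qs.execution_count,
       qs.total_worker_time,
       st.text AS batch_text   -- whole batch; the statement offsets narrow it down
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;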
Have you tried using the Database Engine Tuning Advisor included in SQL Server? It will analyze your query and suggest new indexes that will improve the performance of the query. Some of them will be good, some will be bad (for example, I've seen it suggest adding every column in a table to an index, sometimes like 30 of them!), so I don't follow it blindly. Generally I'll add a few indexes and then retest, to find the suggestions that are the most important. I've used it to optimize many queries that I thought I had properly indexed, only to find I could get a lot more performance out of them.
I had a similar setup, large stored procedures with lots of large temp tables.
Our problem was that the joins with and between the temp tables were very slow.
I recommend that you look at your execution plan and try to add relevant indexes to the temp tables too if you have not already.
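A sketch of what I mean, with made-up names; index the temp table on the join key after it is populated:
CREATE TABLE #search_batch (customer_id INT, order_total MONEY);
-- ... populate #search_batch ...
CREATE INDEX IX_search_batch_customer ON #search_batch (customer_id);
-- joins on customer_id can now use the index instead of scanning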
I want to know how to do the following general steps:
where to find slow SQL
how to debug SQL (including functions)
how to create indexes properly
when using UPDATE STATISTICS, when should I use HIGH or LOW, and why?
I am going to write a paper about this topic; any help is welcome.
One place to start is, funnily enough, the Informix Performance Tuning Guide, one section of the Informix 11.70 Information Centre. In particular, it explains most of what you need to know about UPDATE STATISTICS, and also about automatic update statistics.
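To make the HIGH/LOW part concrete, the modes look like this; the table and column names are placeholders, and the guide explains when each mode is appropriate:
UPDATE STATISTICS LOW FOR TABLE orders;                 -- table and index data, no column distributions
UPDATE STATISTICS MEDIUM FOR TABLE orders;              -- sampled column distributions
UPDATE STATISTICS HIGH FOR TABLE orders (customer_id);  -- exact distribution for one column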
For question 3, at one level, there isn't much to it - you follow the syntax from the manuals and it works. I'm guessing though that you're more concerned with whether you should create an index on a table; this would in part follow on from questions 1 and 2.
There are a variety of ways to find slow SQL. If you have OAT (OpenAdmin Tool), it has ways to report the slowest queries. Alternatively, you can look at SET EXPLAIN.
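A minimal SET EXPLAIN session looks like this; the plan is appended to a file named sqexplain.out in the current directory:
SET EXPLAIN ON;
-- run the statement you want to examine, for example:
SELECT COUNT(*) FROM systables;
SET EXPLAIN OFF;
If I remember right, 11.x also supports SET EXPLAIN ON AVOID_EXECUTE, which records the plan without actually running the query.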
If you have Informix 11.70, then there is a built-in SPL (stored procedure language) debugging facility. For earlier versions, Server Studio and Sentinel have some support. You can also look at the built-in TRACE facility and the related SET DEBUG FILE statement, but they tend to be tricky to interpret, and don't really give you performance information (more a question of correct vs incorrect functioning).