How to profile pgSQL functions?

I have a script written in pgSQL that moves records from one table to others. The logic is quite sophisticated: it involves lots of SELECT queries. For readability, the script is divided into many functions calling one another.
I would like to find a bottleneck in the script.
I tried to use pgbadger to find the most time-consuming queries. I enabled logging according to the instructions in https://github.com/dalibo/pgbadger/blob/master/README (the POSTGRESQL CONFIGURATION section). The problem is that pgbadger relies on the logs, and the logs are at the level of pgSQL functions, not individual SELECT statements. So the only information I get is that running a given function took X seconds. I would like to see a chart showing how long each SELECT query ran. Is there a way to do this without reorganizing the script?
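For reference, the logging settings I enabled look roughly like this (reconstructed from that README section, so the exact values there may differ; shown as ALTER SYSTEM statements, which need PostgreSQL 9.4+, though editing postgresql.conf directly is equivalent):

    -- Log every statement together with its duration, in a format pgbadger can parse.
    ALTER SYSTEM SET log_min_duration_statement = 0;
    ALTER SYSTEM SET log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h ';
    ALTER SYSTEM SET log_checkpoints = on;
    ALTER SYSTEM SET log_lock_waits = on;
    ALTER SYSTEM SET log_temp_files = 0;
    SELECT pg_reload_conf();   -- apply the changes without a restart

These settings only log top-level statements, which is exactly the limitation I am running into: statements executed inside the functions do not show up individually.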

Related

Why is Redshift inefficient even for the simplest select queries?

I have a few views in my Redshift database. There are a couple of users who run simple select statements against these views. When a single select query is run, it executes quickly (typically a few seconds), but when multiple select queries (the same simple select statement) are run at the same time, all the queries get queued on the Redshift side and take forever to return results. I'm not sure why the same query that takes a few seconds gets queued when triggered in parallel with other select queries.
I am curious to know how this can be resolved or whether there is any workaround I should consider.
There are a number of reasons why this could be happening. First off, how many queries in parallel are we talking about? 10, 100, 1,000?
The WLM configuration determines the parallelism the cluster is set up for. If the WLM has a queue with only one slot, then only one query can run at a time.
Just because a query is simple doesn't mean it is easy. If the tables aren't configured correctly, or if a lot of data is being read (or spilled), a lot of system resources can be needed to run the query. When many such queries arrive at once, those resources get overloaded and things slow down. You may need to review your cluster and table configurations to address any issues.
I could keep guessing at possibilities, but the better approach would be to provide an example query, the WLM configuration, and some cluster performance metrics (from the console) to help narrow things down.
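If it helps narrow things down, a query along these lines against the stl_wlm_query system table shows whether the slow runs are actually spending their time queued rather than executing (a sketch; the service_class filter for user-defined queues may need adjusting on your cluster):

    -- Queue time vs. execution time per recent query; both columns are in microseconds.
    SELECT query,
           service_class,
           slot_count,
           total_queue_time / 1000000.0 AS queue_seconds,
           total_exec_time  / 1000000.0 AS exec_seconds
      FROM stl_wlm_query
     WHERE service_class > 5
     ORDER BY queue_start_time DESC
     LIMIT 50;

If queue_seconds dominates while exec_seconds stays small, the WLM queue depth is the first thing to look at.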

MonetDB Query Plan

I have a few queries that I am running and I would like to view some sort of query plan for a given query. When I add "explain" before the query, I get a long (~4,000 lines) result that is not possible to interpret.
The MAL plan exposes all parallel activity needed to solve the query. Each line is a relational algebra operator or catalog action.
You might also use PLAN to get an idea of the output of the SQL optimizer.
In the EXPLAIN output, each part of the physical execution plan that will be executed in parallel is repeated once for each core on your machine. That's why EXPLAIN can sometimes produce a huge MAL plan.
If you just want an idea of how a query is handled, you can force MonetDB to generate a sequential MAL plan; then, at least, you get rid of the repetitions. For this, you can change the default optimizer pipeline to, e.g., 'sequential_pipe'. This can be done either in a client (it then applies only to that client session) or in the server (it then applies to the whole server session). For more information: https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/OptimizerPipelines
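For example, in an mclient session the switch and the two plan views look roughly like this (a sketch; the orders/customers tables are made up for illustration):

    SET optimizer = 'sequential_pipe';  -- applies to this client session only

    -- MAL execution plan, now without the per-core repetitions
    EXPLAIN SELECT c.name, SUM(o.amount)
              FROM orders o JOIN customers c ON o.cust_id = c.id
             GROUP BY c.name;

    -- Relational plan produced by the SQL optimizer
    PLAN SELECT c.name, SUM(o.amount)
           FROM orders o JOIN customers c ON o.cust_id = c.id
          GROUP BY c.name;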

Sql huge insert script

I took a backup of a table in the form of an insert script using Toad for Oracle. I could not use that script in Toad to perform the inserts because of its huge size. Is there a way that I can run the huge script using Toad?
1. Reduce network time by running the script on the server. Chances are the vast majority of the time is spent waiting for the network. Normally each INSERT statement is a separate round-trip.
2. Reduce network time by batching the inserts. Wrap a begin and end; around a large number of inserts, as in the sketch after this answer. A PL/SQL block only requires one round-trip. Note that you probably cannot put the entire script in a single anonymous block, as there are parsing limits; you will get DIANA errors with anonymous blocks larger than roughly a few megabytes.
3. Run the code indirectly. Maybe just loading the file in Toad is the problem? Run a script that simply calls that script, perhaps something like @my_script.sql?
Without knowing more about Toad or what the script looks like I cannot say for sure if these will work. But I've used these approaches with similar issues, there is usually a way to make simplistic install scripts run more than 10 times faster.
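As a sketch of point 2 (the table and column names are placeholders for whatever the exported script contains), the generated INSERTs would be grouped into anonymous blocks so that each block is a single round-trip:

    BEGIN
      INSERT INTO my_table (id, name) VALUES (1, 'first');
      INSERT INTO my_table (id, name) VALUES (2, 'second');
      INSERT INTO my_table (id, name) VALUES (3, 'third');
      -- ... many more inserts, but keep each block well under the parsing limit ...
    END;
    /
    COMMIT;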
Try running the script in SQL*Plus using '@'.
1. From the View menu, show the Project Manager.
2. Add SQL files to the project.
3. Select the files, right-click, and choose Execute.

PreparedStatement and ORA-01652( unable to extend temp segment)

I have a huge query; it is rather large, so I will not post it here (it has 6 levels of nested queries with ordering and grouping). The query has 2 parameters that are passed to it via PreparedStatement.setString(index, value). When I execute the query through SQL Developer (replacing the query parameters with actual values by hand beforehand), it runs for about 10 seconds and returns approximately 15,000 rows. But when I try to run it from my Java program using a PreparedStatement with variables, it fails with ORA-01652 (unable to extend temp segment). I have tried using a simple Statement from the Java program - it works fine. Also, when I use a PreparedStatement without variables (not calling setString(), but specifying the parameters by hand), it works fine too.
So I suspect that the problem is in the PreparedStatement parameters.
How does the mechanism of these parameters work? Why does a simple statement work fine while the prepared one fails?
You're probably running into issues with bind variable peeking.
For the same query, the best plan can be significantly different depending on the actual bind variables. In 10g, Oracle builds the execution plan based on the first set of bind variables used. 11g mostly fixed this problem with adaptive cursor sharing, a feature that creates multiple plans for different bind variables.
Here are some ideas for solving this problem:
Use literals. This isn't always as bad as people assume. If the good version of your query runs in 10 seconds, the overhead of hard-parsing the query will be negligible. But you may need to be careful to avoid SQL injection.
Force a hard-parse. There are a few ways to force Oracle to hard-parse every query. One method is to call DBMS_STATS with NO_INVALIDATE=>FALSE on one of the tables in the query (see the sketch after this list).
Disable bind-variable peeking / use hints. You can do this by removing the relevant histograms, or by using one of the parameters in the link provided by OldProgrammer. This will stabilize your plan, but it will not necessarily pick the correct plan. You may also need hints to pick the right plan, but then you may not have the right plan for every combination of inputs.
Upgrade to 11g. This may not be an option, but this issue is another good reason to start planning an upgrade.
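As a sketch of the DBMS_STATS idea above (the schema and table names are placeholders), re-gathering statistics with no_invalidate => FALSE invalidates the cached cursors that reference the table, so the next execution is hard-parsed:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname       => 'MY_SCHEMA',
        tabname       => 'MY_TABLE',
        no_invalidate => FALSE);
    END;
    /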

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the in clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that this is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an in clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet so it won't need to be reparsed every time my application is run?
My answer is not directly related to Delphi, but this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a bunch of hard-parsing. This will spike the CPU consumption (and DBA blood pressures) unnecessarily.
By making your query static, it can get by with a soft-parse or perhaps no parse at all! The DB can then cache the execution plan, the DBAs can deal with a more "stable" SQL, and overall performance should be improved.
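One common shape for this is to bind a single delimited string so the SQL text never changes (a sketch only; it assumes Oracle 10g+ for REGEXP_SUBSTR, while on 9i a PL/SQL splitting function along the lines of Tom Kyte's str2tbl plays the same role):

    -- The application binds one string such as '1,2,4'; the database splits it,
    -- so the statement is parsed once and reused with different parameter values.
    SELECT t.*
      FROM myschema.mytable t
     WHERE t.id IN (
             SELECT TO_NUMBER(REGEXP_SUBSTR(:id_list, '[^,]+', 1, LEVEL))
               FROM dual
            CONNECT BY REGEXP_SUBSTR(:id_list, '[^,]+', 1, LEVEL) IS NOT NULL
           );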
