PostgreSQL 9.6 parallelism does not work with create table? - parallel-processing

I have one problem - parallelism in query processing does not seem to work on my database when the query is wrapped in "CREATE TABLE".
A plain SELECT is processed perfectly in parallel and so is very quick. Parallelism also shows up in the EXPLAIN output, and I can see the background workers in "top". So it is much better than on pg 9.5.
But when I use the same SELECT in CREATE TABLE, it runs for ages and not in parallel. It is very much slower than on pg 9.5. Parallelism in this case is not shown in the EXPLAIN output and there are no background workers running in the OS. I tried setting "force_parallel_mode" = ON, but that changed nothing.
Is there some magical setting I missed that should be changed from its default to enable parallelism in CREATE TABLE as well? Or is this expected behavior "by design"?
UPDATE: OK, this seems to be "by design" - Parallel queries on CTE for writing operations in PostgreSQL
But in that case CREATE TABLE ... AS SELECT seems to give very bad performance on 9.6 in comparison with 9.5...
UPDATE: It looks like PostgreSQL 9.6 requires a higher work_mem setting than 9.5. I fiddled with the query plans of the badly running queries and found that just by changing the work_mem setting in the session I can get the cost down from "Sort (cost=807568233.23..807568267.50 rows=13709 width=105)" at work_mem=32MB to "Sort (cost=151127.62..151161.89 rows=13709 width=105)" at work_mem=64MB...
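For illustration, a minimal sketch of that comparison (the table and query below are placeholders, not my real ones):
SET work_mem = '32MB';
EXPLAIN CREATE TABLE big_copy AS
SELECT g AS id, md5(g::text) AS payload
FROM generate_series(1, 1000000) AS g
ORDER BY payload;
-- Repeat with SET work_mem = '64MB'; and compare the cost of the Sort node in the two plans.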

Related

Oracle does not use the best DB plan

I'm struggling with performance in Oracle. The situation is: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes after 15 seconds, and the plan uses the appropriate indexes.
But if the same query has to fill a table in a stored procedure, Oracle uses another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. That's horrible.
What can I do?
The Oracle Query Optimizer tries 2000 different possibilities and chooses the best one in normal situations. But if you think it chose the wrong plan, you may suspect the following:
1- The histograms on the queried tables are stale.
2- Your indexes cannot be used because of how the query is written.
3- You can use index hints to force the indexes to be used.
4- You can use SQL Advisor or run TKPROF for performance analysis to decide what's wrong or what caused the bad performance. Check network, disk I/O values, etc.
If you share your query we can give you more information.
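For point 3, an index hint looks roughly like this (the table, index, and column names are placeholders, not from the original question):
SELECT /*+ INDEX(t orders_customer_idx) */ t.order_id, t.amount
FROM orders t
WHERE t.customer_id = :cust_id;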
It looks like we are not talking about the same query under the two different conditions.
The first case is a simple select over the dblink & the second case is an "insert as select over dblink".
Can you please share the two queries & execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I was able to create a new DB plan with Enterprise Manager. Now it's running perfectly.

Why could an SQL query take more time to execute on each subsequent execution?

I run a complex query against an Oracle DB 11g-based eBS R12 schema:
For the first run it takes 4 seconds. If I run it again, it takes 9 seconds, the next run 30, etc.
If I add "and 1=1" it takes 4 seconds again, then 9, then 30 and so on.
A quick workaround is that we added a randomly generated "and somestring = somestring" condition, and now the results always come back in 4 seconds.
I have never encountered a query that behaved this way (it should be the opposite, or show no significant change between executions). We tested it on 2 copies of the same DB, same behaviour.
How can I debug this? And what internal mechanics could be getting confused?
UPDATE 1:
EXPLAIN PLAN FOR
(my query);
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
The plan is exactly the same before the first run as it is for the subsequent ones. See http://pastebin.com/dMsXmhtG
Check DBMS_XPLAN.DISPLAY_CURSOR. The reason could be cardinality feedback or other adaptive techniques Oracle uses. You should see multiple child cursors related to the SQL_ID of your query, and you can compare their plans.
Does your query use bind variables, and do the columns used for filtering have histograms? This could be another reason.
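To look at the child cursors and compare their plans, something along these lines should work (the sql_id value is a placeholder for your query's real SQL_ID):
SELECT child_number, plan_hash_value, executions
FROM v$sql
WHERE sql_id = 'abcd1234efgh5';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', NULL, 'ALLSTATS LAST'));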
Sounds like you might be suffering from adaptive cursor sharing or cardinality feedback. Here is an article showing how to turn them off - perhaps you could do that and see if the issue stops happening, as well as using #OldProgrammer's suggestion of tracing what is happening.
If one of these is found to be the problem, you can then take the necessary steps to ensure that the root cause (eg. incorrect statistics, unnecessary histograms, etc.) is corrected.
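If you go down the route of turning them off, the settings usually cited for 11g are hidden parameters, so treat the following as an assumption to verify with your DBA or Oracle Support rather than a definitive recipe:
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;            -- cardinality feedback
ALTER SESSION SET "_optimizer_adaptive_cursor_sharing" = FALSE; -- adaptive cursor sharing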

Oracle Bind Query is very slow

I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow, you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the parameters ODP.NET runs under sometimes cause a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare to confirm.
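If you want to repeat the SQL Developer test with real bind variables, one hedged sketch is to run it as a script with the binds declared (this assumes the keys are numeric; the values are placeholders):
VARIABLE key1 NUMBER
VARIABLE key2 NUMBER
EXEC :key1 := 123
EXEC :key2 := 456
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2;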
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query does run fast in SQL Developer. I welcome advice on handling it in another manner.
Adding ODP.NET trace settings to your config will send ODP.NET tracing info to a log file.
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different (meaning the actual performance hit is occurring on the server) between the ODP.NET call and the SQL Developer call. Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring), then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating more delay resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data type in C#? Are the columns key1 and key2 numbers, but the parameters :key1 and :key2 are strings? If so, the query may return the correct results but will require implicit conversion. That implicit conversion is like using a function to_char(key1), which prevents an index from being used.
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first page. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces all rows to be read, so it touches the maximum number of database blocks.
A nice tool in such cases is the tkprof utility, which shows the SQL execution plan; the plan may differ between the cases above (although it should not).
It is also possible that you have accidentally connected to different databases. In such cases it is useful to compare the results of the queries.
Since you are reporting "Bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. t.key1 is a number and :key1 is a string).
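A quick, hedged way to check for that implicit conversion risk is to compare the column data types against the types bound in C# (the table name here assumes the one from the question, uppercased):
SELECT column_name, data_type
FROM user_tab_columns
WHERE table_name = 'THETABLE'
AND column_name IN ('KEY1', 'KEY2');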

PreparedStatement and ORA-01652 (unable to extend temp segment)

I have a very huge query. It is rather large, so I will not post it here (it has 6 levels of nested queries with ordering and grouping). The query has 2 parameters that are passed to it via PreparedStatement.setString(index, value). When I execute my query through SQL Developer (replacing the query parameters with actual values by hand beforehand), the query runs in about 10 seconds and returns approximately 15000 rows. But when I try to run it through a Java program using a PreparedStatement with variables, it fails with ORA-01652 (unable to extend temp segment). I have tried using a simple Statement from the Java program - it works fine. Also, when I use a PreparedStatement without variables (i.e. don't use setString(), but specify the parameters by hand), it works fine too.
So, I suspect that the problem is in the PreparedStatement parameters.
How does the mechanism of these parameters work? Why does the simple statement work fine while the prepared one fails?
You're probably running into issues with bind variable peeking.
For the same query, the best plan can be significantly different depending on the actual bind variables. In 10g, Oracle builds the execution plan based on the first set of bind variables used. 11g mostly fixed this problem with adaptive cursor sharing, a feature that creates multiple plans for different bind variables.
Here are some ideas for solving this problem:
Use literals: This isn't always as bad as people assume. If the good version of your query runs in 10 seconds, the overhead of hard-parsing the query will be negligible. But you may need to be careful to avoid SQL injection.
Force a hard-parse: There are a few ways to force Oracle to hard-parse every query. One method is to call DBMS_STATS with NO_INVALIDATE=>FALSE on one of the tables in the query.
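A minimal sketch of that call (the schema and table names are placeholders):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'MY_SCHEMA',
    tabname       => 'MY_TABLE',
    no_invalidate => FALSE);  -- invalidates dependent cursors, forcing a hard parse on the next run
END;
/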
Disable bind-variable peeking / hints: You can do this by removing the relevant histograms, or by using one of the parameters in the link provided by OldProgrammer. This will stabilize your plan, but will not necessarily pick the correct plan. You may also need to use hints to pick the right plan. But then you may not have the right plan for every combination of inputs.
Upgrade to 11g: This may not be an option, but this issue is another good reason to start planning an upgrade.

Oracle metrics monitoring and reporting in real time

I am stress testing a database table.
I am looking for any software that can connect to my database and show me some metrics like the number of rows in a table, time for inserts, inserts/time, table fragmentation [logical/physical], etc.
It would be great if the reporting tool could do the following:
1] Report in real time, or at least after some interval, so that I do not have to wait for the test to finish to get a first look at the data
2] Ability to do stuff with the data later, like get the 99.99th percentile, average, etc.
3] Is mostly freely available :)
Does anyone have any suggestions for something I can use with my Oracle table? Any pointers would be great.
I can actually write scripts to log stuff like select count(*), etc., but then I will have to spend a lot of time parsing and massaging the data for reporting rather than on the tests themselves.
I think something intelligent might already be out there?
Thanks
Edit:
I am looking at a piece of design for a new architecture. The tests are "comparison" tests for different designs, and hence, as long as I run them on the same hardware and the same schema etc., they are comparable to some granularity. I want to monitor index fragmentation, response times, etc. If you think there are other things that can change, please let me know. I am trying to roll the table back to a particular state [basically truncate] for each new iteration of the test.
First, Oracle has built-in functionality for telling you the number of rows in a table (either use count(*) or search 'gather statistics oracle' for another option).
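For example, either of these gives you a row count (the table name is a placeholder; num_rows is populated by gathering statistics):
SELECT COUNT(*) FROM my_table;
SELECT table_name, num_rows, last_analyzed
FROM user_tables
WHERE table_name = 'MY_TABLE';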
But "stress testing a table" sounds to me like you're going down the wrong path. Most of the metrics you're mentioning ("time for inserts , inserts/time, table fragmentation[logical/physical] etc") are highly dependent on many factors:
what OS Oracle's running on
how the OS is tuned (i.e. other services running)
how the specific Oracle instance is configured
what underlying storage architecture Oracle's using (and how tablespaces are configured)
what other queries are being executed in the database at the exact same time as your test
But NONE of them would be related to the table design itself.
Now, if you're wondering if your normalized (or de-normalized) table schema is hurting your application, that's another matter. As is performance being degraded by improper/unneeded/missing indexes, triggers, or a host of other problems.
But if you really want an app that will give you real-time monitoring, check out Quest Software's Spotlight on Oracle. But it's definitely not free.
Just to add to the other comments, I believe what you really want is to stress test the queries you're running and not the table. The table is just a bunch of data blocks on a disk and the query is what will make the difference in performance as far as development is concerned. That will tell you if you need different indexes or need to redesign the query.
On the other hand, if you're looking at it as a DBA or system administrator, you're probably more interested in OS level statistics especially disk latency, memory paging, and CPU utilization.
All this is available in Enterprise Manager, which is my primary tuning tool for both development and DBA work. If you don't have that, read up on using sql_trace to profile your queries, and on your OS-specific documentation for how to get those stats.
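A minimal sketch of session-level tracing, assuming you then process the resulting trace file with tkprof:
ALTER SESSION SET sql_trace = TRUE;
-- ... run the workload you want to profile ...
ALTER SESSION SET sql_trace = FALSE;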
