Is it possible to control or limit a user's parallel thread usage in Oracle?
Let's say user dev is executing a SELECT query that takes 32 parallel threads.
But irrespective of hints or table design, I want the query to run in a single thread, as if it had a /*+ NOPARALLEL */ hint. The same should apply to any DML user dev runs against the DB.
Is there any way I could achieve this?
I tried searching for an approach but couldn't get anywhere.
The only way we can limit a user's consumption of system resources is with a profile. CREATE PROFILE provides a couple of options for limiting CPU usage, CPU_PER_SESSION and CPU_PER_CALL, but alas not the number of CPUs. See the CREATE PROFILE documentation to find out more.
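For what it's worth, a minimal sketch of such a profile (the profile name and limit values are invented for illustration; CPU limits are expressed in hundredths of a second, and enforcement requires the RESOURCE_LIMIT initialization parameter to be TRUE):

```sql
-- Hypothetical profile capping CPU time; note it caps total CPU
-- consumed, not the degree of parallelism.
CREATE PROFILE dev_cpu_cap LIMIT
  CPU_PER_SESSION 360000   -- 3600 CPU-seconds per session
  CPU_PER_CALL     6000;   -- 60 CPU-seconds per call

-- Assign it to the dev user.
ALTER USER dev PROFILE dev_cpu_cap;
```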
I would say that in the sort of environment where we would want to impose resource limits, i.e. a live one, the use of parallel query should either be left to the database through the PARALLEL_AUTOMATIC_TUNING parameter or be locked down with the PARALLEL hint on pre-canned queries only.
I have a few views in my Redshift database. A couple of users run simple SELECT statements on these views. When a single SELECT query is run, it executes quickly (typically a few seconds), but when multiple SELECT queries (the same simple SELECT statement) run at the same time, they all get queued on the Redshift side and take forever to return results. I'm not sure why the same query that takes a few seconds gets queued when triggered in parallel with other SELECT queries.
I am curious how this can be resolved, or whether there is any workaround I should consider.
There are a number of reasons why this could be happening. First off, how many queries in parallel are we talking about: 10, 100, 1000?
The WLM configuration determines the parallelism that a cluster is set up to perform. If the WLM has a queue with only one slot then only one query can run at a time.
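As a quick check, assuming you can read Redshift's system tables, something like this shows how many queries are queued versus running per WLM service class:

```sql
-- Queued vs. running queries per WLM service class; on classic WLM,
-- service classes 6 and above are typically the user-defined queues.
SELECT service_class, state, COUNT(*) AS query_count
FROM stv_wlm_query_state
GROUP BY service_class, state
ORDER BY service_class, state;
```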
Just because a query is simple doesn't mean it is easy. If the tables aren't configured correctly, or if a lot of data is being read (or spilled), significant system resources can be needed to perform the query. When many such queries arrive at once, these resources get overloaded and things slow down. You may need to evaluate your cluster / table configurations to address any issues.
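For the spilling case, one way to check (again assuming access to the system tables) is to look for disk-based steps in recent queries:

```sql
-- Recent query steps that went disk-based (spilled) instead of staying
-- in memory; lots of hits here suggests overloaded resources.
SELECT query, step, rows, is_diskbased
FROM svl_query_summary
WHERE is_diskbased = 't'
ORDER BY query DESC
LIMIT 50;
```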
I could keep guessing possibilities but the better approach would be to provide a query example, WLM configuration and some cluster performance metrics (console) to help narrow things down.
I'd like to save planner cost by using a plan cache, since the ORCA/legacy optimizer can take dozens of milliseconds.
I think Greenplum caches query plans at the session level; once the session ends, other sessions cannot share the analyzed plan. What's more, we can't keep sessions open forever, since the GP system will not release resources until the TCP connection is disconnected.
Most major databases cache plans after the first run and reuse them across connections.
So, is there any switch that turns on query plan caching across connections? Also, within a session I can see that the client-side timing statistics don't match the "Total time" the planner reports.
Postgres can cache plans as well, on a per-session basis; once the session ends, the cached plan is thrown away. This can be tricky to optimize/analyze, but it is generally of less importance unless the query you are executing is really complex and/or there are a lot of repeated queries.
The documentation explains prepared statements in detail pretty well. We can query pg_prepared_statements to see what is cached. Note that it is not available across sessions and is visible only to the current session.
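A minimal sketch of how that looks in a session (the table and statement names are invented):

```sql
-- Prepare a statement in this session...
PREPARE get_order (int) AS
  SELECT * FROM orders WHERE order_id = $1;  -- hypothetical table

EXECUTE get_order(42);

-- ...then inspect the session-local cache; other sessions won't see this row.
SELECT name, statement, prepare_time
FROM pg_prepared_statements;
```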
When a user starts a session with Greenplum Database and issues a query, the system creates groups or 'gangs' of worker processes on each segment to do the work. After the work is done, the segment worker processes are destroyed except for a cached number which is set by the gp_cached_segworkers_threshold parameter.
A lower setting conserves system resources on the segment hosts, but a higher setting may improve performance for power-users that want to issue many complex queries in a row.
Also see gp_max_local_distributed_cache.
Obviously, the more you cache, the less memory there will be available for other connections and queries. Perhaps not a big deal if you are only hosting a few power users running concurrent queries... but you may need to adjust your gp_vmem_protect_limit accordingly.
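As a sketch, assuming these GUCs are adjustable at the session level in your Greenplum version (worth confirming against your release's documentation), tuning might look like:

```sql
-- Inspect the current settings...
SHOW gp_cached_segworkers_threshold;
SHOW gp_max_local_distributed_cache;

-- ...then raise the segworker cache for this session so more workers
-- survive between queries (the value is illustrative, not a recommendation).
SET gp_cached_segworkers_threshold = 10;
```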
For clarification:
Segment resources are released after the gp_vmem_idle_resource_timeout.
Only the master session will remain until the TCP connection is dropped.
I'm struggling with performance in Oracle. The situation: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes in 15 seconds, and the plan uses appropriate indexes.
If the same query is used to fill a table inside a stored procedure, Oracle uses another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. That's horrible.
What can I do?
The Oracle query optimizer tries thousands of different possibilities and chooses the best one in normal situations. But if you think it chose the wrong plan, you may suspect the following:
1- The histograms belonging to the queried tables are stale.
2- Your indexes cannot be used because of how the query is written.
3- You can use index hints to force the indexes to be used (see the sketch below).
4- You can use SQL Tuning Advisor or run TKPROF for performance analysis and decide what's wrong or what caused the bad performance. Check network and disk I/O values, etc.
If you share your query we can give you more information.
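For points 1 and 3, a rough sketch (the owner, table, and index names are placeholders for yours):

```sql
-- Refresh possibly-stale statistics and histograms on the queried table...
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'DEV',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

-- ...and, if the optimizer still full-scans, hint the index explicitly.
SELECT /*+ INDEX(o orders_cust_idx) */ *
FROM   orders o
WHERE  o.customer_id = :cust_id;
```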
It looks like we are not talking about the same query in the two conditions.
The first case is a simple SELECT over the dblink, and the second case is an INSERT AS SELECT over the dblink.
Can you please share both queries and their execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I was able to create a new DB plan with Enterprise Manager. Now it runs perfectly.
I would like to ask several questions about Oracle monitoring. Can I use SQL queries to get monitoring data like CPU utilization, RAM utilization, HDD space, tablespace usage, etc.? Do I need a privileged user, or can any Oracle user do this? If this is not possible, what are the alternatives?
Of the things you mentioned, I think only tablespace usage should be monitored using queries. (There is a very good query here: Find out free space on tablespace.)
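For reference, a common sketch of such a tablespace query (it requires access to the DBA_* views, and the exact column math varies slightly by version; temp tablespaces need V$TEMP_SPACE_HEADER instead):

```sql
-- Total, used, and free space per tablespace.
SELECT df.tablespace_name,
       ROUND(df.total_mb)                       AS total_mb,
       ROUND(df.total_mb - NVL(fs.free_mb, 0))  AS used_mb,
       ROUND(NVL(fs.free_mb, 0))                AS free_mb
FROM  (SELECT tablespace_name, SUM(bytes) / 1024 / 1024 AS total_mb
       FROM dba_data_files
       GROUP BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, SUM(bytes) / 1024 / 1024 AS free_mb
       FROM dba_free_space
       GROUP BY tablespace_name) fs
  ON df.tablespace_name = fs.tablespace_name
ORDER BY df.tablespace_name;
```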
CPU and filesystem should be monitored at the OS level (the exception probably being ASM, where queries are probably easier to use than the ASM console).
If you want to monitor usage by individual sessions, then you need privileges to access the dynamic performance views, e.g. v$sql_workarea_active and v$session to get memory usage for a session or query, or v$session_wait to get information on waits. I don't know exactly what you wish to monitor, but the Oracle documentation is your friend for information on these views.
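For example, a sketch joining two of those views to see memory used by active sort/hash work areas per session (it needs SELECT privileges on the V$ views):

```sql
-- Memory currently used by active work areas, per session.
SELECT s.sid,
       s.username,
       w.operation_type,
       ROUND(w.actual_mem_used / 1024) AS mem_used_kb
FROM   v$session s
JOIN   v$sql_workarea_active w ON w.sid = s.sid
ORDER  BY w.actual_mem_used DESC;
```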
The best solution I know is to use Oracle Enterprise Manager, where you can easily monitor all metrics and also create your own.
You can also implement your own metrics monitoring with an open source tool like Zabbix (or another of your choice). This is also a much cheaper way.
Oracle has performance views where we can get information about Oracle database performance.
To answer your question, you can query the v$osstat view to get info about CPU utilization, RAM utilization, etc. using SQL queries. See the official Oracle documentation on v$osstat.
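For instance, a minimal sketch pulling a few of the common stats:

```sql
-- CPU count, physical memory, and busy/idle time (the time values are
-- in hundredths of a second) as seen by the database host.
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'PHYSICAL_MEMORY_BYTES',
                     'BUSY_TIME', 'IDLE_TIME');
```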
There are also many other views, especially v$sysstat, v$sqlstat, v$sys_time_model, and v$metric, where you can dig up a lot of performance-related information. You can refer to the link below to see the basic metrics one can query in Oracle.
Oracle Metrics List by Don Burleson
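As one more example, a sketch against v$sys_time_model for overall DB time and DB CPU accumulated since instance startup:

```sql
-- v$sys_time_model reports time in microseconds; convert to seconds.
SELECT stat_name, ROUND(value / 1e6) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```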
I am stress testing a database table.
I am looking for any software that can connect to my database and show me metrics like the number of rows in a table, time per insert, inserts over time, table fragmentation (logical/physical), etc.
It would be great if the reporting tool:
1] Reports in real time, or at least at some interval, so that I do not have to wait for the test to finish to get a first look at the data
2] Lets me do things with the data later, like get the 99.99th percentile, averages, etc.
3] Is mostly freely available :)
Does anyone have any suggestion for something I can use with my Oracle table? Any pointers would be great.
I could actually write scripts to log things like select count(*), etc. (a sketch is below), but then I would have to spend a lot of time parsing and reshaping the data reporting rather than running the tests.
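For context, this is roughly what I mean, with a hypothetical log table and test table:

```sql
-- Hypothetical log table, sampled on an interval during the test run.
CREATE TABLE stress_log (
  sampled_at TIMESTAMP,
  row_count  NUMBER
);

INSERT INTO stress_log (sampled_at, row_count)
SELECT SYSTIMESTAMP, COUNT(*) FROM my_test_table;  -- table under test
COMMIT;
```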
I suspect some intelligent tool might already be out there?
Thanks
Edit:
I am looking at a piece of design for a new architecture. The tests are "comparison" tests between different designs, so as long as I run them on the same hardware with the same schema, etc., they are comparable at some granularity. I want to monitor index fragmentation, response times, etc. If you think there are other things that can change, please let me know. I am trying to roll the table back to a particular state (basically truncate) for each new iteration of the test.
First, Oracle has built-in functionality for telling you the number of rows in a table (either use count(*) or search 'gather statistics oracle' for another option).
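For example, a quick sketch of the statistics route (the table name is a placeholder):

```sql
-- Gather stats, then read the row count the optimizer recorded; note
-- this is as of the last gather, unlike a live COUNT(*).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TEST_TABLE');
END;
/

SELECT num_rows, last_analyzed
FROM   user_tables
WHERE  table_name = 'MY_TEST_TABLE';
```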
But "stress testing a table" sounds to me like you're going down the wrong path. Most of the metrics you're mentioning ("time for inserts , inserts/time, table fragmentation[logical/physical] etc") are highly dependent on many factors:
what OS Oracle's running on
how the OS is tuned (i.e. other services running)
how the specific Oracle instance is configured
what underlying storage architecture Oracle's using (and how tablespaces are configured)
what other queries are being executed in the database at the exact same time as your test
But NONE of them would be related to the table design itself.
Now, if you're wondering if your normalized (or de-normalized) table schema is hurting your application, that's another matter. As is performance being degraded by improper/unneeded/missing indexes, triggers, or a host of other problems.
If you really want an app that will give you real-time monitoring, check out Quest Software's Spotlight on Oracle. But it's definitely not free.
Just to add to the other comments, I believe what you really want is to stress test the queries you're running and not the table. The table is just a bunch of data blocks on a disk and the query is what will make the difference in performance as far as development is concerned. That will tell you if you need different indexes or need to redesign the query.
On the other hand, if you're looking at it as a DBA or system administrator, you're probably more interested in OS level statistics especially disk latency, memory paging, and CPU utilization.
All this is available in Enterprise Manager, which is my primary tuning tool for both development and DBA work. If you don't have that, read up on using sql_trace to profile your queries, and check your OS-specific documentation on how to get those stats.
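A minimal sketch of the sql_trace route (the identifier is arbitrary; the resulting trace file is then processed with tkprof on the server):

```sql
-- Tag the trace file so it's easy to find, then trace this session.
ALTER SESSION SET tracefile_identifier = 'stress_test';
ALTER SESSION SET sql_trace = TRUE;

-- ... run the workload under test ...

ALTER SESSION SET sql_trace = FALSE;
```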