Best way to do Oracle database partitions?

I have a table of 50k keywords and I provide an auto-complete feature for these keywords based on a count mechanism, but retrieving the keywords still takes time.
How should the database be partitioned for fast retrieval?
Any help is appreciated.

A table with 50k rows is very small. There should be no need for (and no benefit from) partitioning it.
You need to look at the query execution plan and your algorithm in general. Maybe you just need an index. Or an in-memory cache.
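For example, a minimal sketch of the index approach (keywords, keyword and cnt are hypothetical names and :prefix is the typed-in text; none of these come from the question). A plain B-tree index lets Oracle answer the prefix search with an index range scan:
CREATE INDEX keywords_keyword_idx ON keywords (keyword);

-- top 10 most frequent keywords matching the typed prefix
SELECT *
FROM  (SELECT keyword
       FROM   keywords
       WHERE  keyword LIKE :prefix || '%'
       ORDER  BY cnt DESC)
WHERE  ROWNUM <= 10;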

Some thoughts:
- 50k keywords is not that big a table; partitions won't help, but a smart index might.
- You might fare best by loading a suitable data structure (e.g. a trie) into memory first.
- If the data stays in the DB, your auto-complete will likely be slow and unresponsive, as every keypress results in a round trip to the DB.

Perhaps the table statistics are stale, so the optimizer can choose a wrong plan.
Try this from a user with the DBA role:
exec dbms_stats.gather_table_stats (ownname => 'YOUR_OWNER', tabname => 'YOUR_TABLE');
alter system flush shared_pool;
Then test the time to get the keywords again.
P.S. The statistics should be gathered regularly.

Related

Advantages of temporary tables in Oracle

I've tried to figure out what performance impact the use of temporary tables has on an Oracle database. We want to use these tables in our ETL process to store temporary results. At the moment we are using physical tables for this purpose and truncating those tables at the beginning of the ETL process. I know that the truncate process is very expensive, and therefore I thought it might be better to use temporary tables instead.
Does anyone have experience with whether there is a performance boost from using temporary tables in this scenario?
There were only some answers to this question regarding SQL Server, like in this question, but I don't know whether those recommendations also hold for Oracle.
It would be nice if someone could list the advantages and disadvantages of this feature and also point out the scenarios in which it is applicable.
Thanks in advance.
First of all: truncate is not expensive; a delete with no condition is very expensive.
Second: do your temporary tables have indexes? What about foreign keys?
That could affect performance.
Temporary tables in Oracle work more or less like in SQL Server (of course the syntax is different, e.g. GLOBAL TEMPORARY TABLE), and both are just tables.
You won't get any performance gain from temporary tables over normal tables; they are essentially the same: they have a definition in the DB, can have indexes, and are logged.
The only difference is that a temporary table's data is exclusive to your session (except for global tables), which means that if multiple scripts from multiple sessions refer to the same table, each one reads/writes a different table and they cannot lock each other (in that case you could gain performance, but I think that is rarely the case).
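For reference, a minimal sketch of an Oracle global temporary table (etl_stage and its columns are hypothetical names, not from the question):
CREATE GLOBAL TEMPORARY TABLE etl_stage (
    id      NUMBER,
    payload VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;  -- keep rows for the whole session; use ON COMMIT DELETE ROWS to clear them at each commit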

Postgres tsvector_update_trigger sometimes takes minutes

I have configured free text search on a table in my postgres database. Pretty simple stuff, with firstname, lastname and email. This works well and is fast.
I do, however, sometimes experience long delays when inserting a new entry into the table: the insert keeps running for minutes and also generates huge WAL files (we use the WAL files for replication).
Is there anything I need to be aware of with my free-text index? Like Postgres maybe randomly restructuring it for performance reasons? The index is currently around 400 MB.
Thanks in advance!
Christian
Given the size of the WAL files, I suspect you are right that it is an index update/rebalancing that is causing the issue. However, I have to wonder what else is going on.
I would recommend against storing tsvectors in separate columns. A better way is to build a functional GIN index on the output of the two-argument (immutable) form of to_tsvector(). You can have multiple indexes for multiple languages if you need them. So instead of a trigger that takes, say, a field called description and stores the tsvector in desc_tsvector, I would recommend just doing:
CREATE INDEX mytable_description_tsvector_idx ON mytable USING gin (to_tsvector('english', description));
Now, if you need a consistent search interface across a whole table, there are more elegant ways of doing this using "table methods."
In general the functional index approach has fewer issues associated with it than anything else.
Now, a second thing you should be aware of is partial indexes. If you need to, you can index only the records of interest. For example, if most of my queries only check the last year, I can index just that slice. Note that the index predicate may use only immutable expressions, so a fixed cutoff date has to stand in for now() - '1 year' and the index needs to be recreated periodically:
CREATE INDEX mytable_description_tsvector_recent_idx ON mytable USING gin (to_tsvector('english', description))
WHERE created_at > DATE '2012-01-01';  -- hypothetical cutoff standing in for "one year ago"
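A query then uses these indexes as long as its expression matches the indexed one, e.g. (a sketch; the search terms are placeholders):
SELECT *
FROM   mytable
WHERE  to_tsvector('english', description) @@ to_tsquery('english', 'search & term');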

(TSQL) INSERT doubling time of the query

I have a quite complex multi-join TSQL SELECT query that runs for about 8 seconds and returns about 300K records, which is currently acceptable. But I need to reuse the results of that query several times later, so I am inserting the results into a temp table. The table is created in advance with columns that match the output of the SELECT query. But as soon as I do INSERT INTO ... SELECT, execution time more than doubles to over 20 seconds! The execution plan shows that 46% of the query cost goes to "Table Insert" and 38% to Table Spool (Eager Spool).
Any idea why this is happening and how to speed it up?
Thanks!
The "Why" of it hard to say, we'd need a lot more information. (though my SWAG would be that it has to do with logging...)
However, the solution, 9 times out of 10 is to use SELECT INTO to make your temp table.
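Something along these lines (a sketch; the table and column names are placeholders, not from the question):
SELECT t.col1, t.col2, o.col3
INTO   #results                     -- SELECT ... INTO creates the temp table itself and can be minimally logged
FROM   dbo.big_table t
JOIN   dbo.other_table o ON o.id = t.id;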
I would start by looking at standard tuning items. Is the disk performing well? Are there sufficient resources (I/O, RAM, CPU, etc.)? Is there a bottleneck in the RDBMS? It doesn't sound like the issue, but what is happening with locking? Does other code give similar results? Is other code performant?
A few things I can suggest based on the information you have provided. If you don't care about dirty reads, you could always change the transaction isolation level (if you're using MS T-SQL):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT ...
This may speed things up on your initial query, as locks will not need to be taken on the data you are querying. If you're not using SQL Server, do a Google search for how to do the same thing with the technology you are using.
For the insert portion, you said you are inserting into a temp table. Does your database support adding primary keys or indexes to your temp table? If it does, have a dummy column in there that is an indexed column. Also, have you tried using a regular database table for this? Depending on your setup, it is possible that this will speed up your insert times.
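For example, in SQL Server a temp table can carry keys and indexes just like a permanent table (a sketch with placeholder names):
CREATE TABLE #results (
    id   INT IDENTITY(1,1) PRIMARY KEY,  -- clustered key acting as the indexed "dummy" column
    col1 INT,
    col2 VARCHAR(100)
);
CREATE NONCLUSTERED INDEX ix_results_col1 ON #results (col1);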

Doesn't Read-Only make a difference for SQL Server?

I've been tasked with optimizing a rather nasty stored procedure in a legacy system. It's a database dedicated to search, and a new copy is being generated every day, with a lot of complex joins being de-normalized. No writes are being performed, only SELECTs, so I figured some easy improvements could be made by making the whole database read-only and changing the recovery model to "Simple".
Much to my surprise, this didn't help at all! The stored procedure still takes the same amount of time to complete. In fact, I'm so surprised that I figured I did it wrong!
My questions:
Do I need to do anything other than setting “Database read-only” to “true”?
Am I wrong to expect a significant performance improvement from making the database read-only?
Same for the recovery model: Shouldn’t “Simple” have some noticeable impact?
Are there other similar database-wide configurations that can improve performance in this scenario?
The stored procedure is huge, with temporary tables, 40+ tables joined in 20+ queries. But I’d like to optimize the database itself before I edit this proc.
Since no writes are performed by your SP, there is no reason to expect a noticeable performance improvement from changing the recovery model or the read-write mode.
As others mentioned, you should look into the query plan and optimize your queries.
Another hint: indexes in the database might have become fragmented while the database was being filled. Since the data is not going to be modified any more, it might help to rebuild all the indexes with fill factor 100; this can get rid of fragmentation and compact the data.
Call this for each table in the database: ALTER INDEX ALL ON table_name REBUILD WITH (FILLFACTOR = 100).
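One way to run that over every table is the undocumented (but widely used) sp_MSforeachtable helper, where ? stands for each table name:
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD WITH (FILLFACTOR = 100)';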
Generally, I wouldn't expect much of a performance improvement from this, but it depends on the particular database.
Speaking of query optimization, there are very useful features in SQL Server 2005 and later: the execution-related and index-related Dynamic Management Views. In particular, sys.dm_exec_query_stats and the missing-index DMVs are of interest.
These give you almost the same information as the Tuning Advisor, but based on your real-life workload, so you don't need to simulate it and feed it to the Advisor.
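For example, a common sketch for listing the most expensive statements by total elapsed time (the selected columns are just one reasonable choice):
SELECT TOP 10
       qs.execution_count,
       qs.total_elapsed_time,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;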
Have you tried using the Database Engine Tuning Advisor included in SQL Server? It will analyze your query and suggest new indexes that will improve the performance of the query. Some of them will be good, some will be bad (for example, I've seen it suggest adding every column in a table to an index, sometimes like 30 of them!), so I don't follow it blindly. Generally I'll add a few indexes and then retest, to find the suggestions that are the most important. I've used it to optimize many queries that I thought I had properly indexed, only to find I could get a lot more performance out of them.
I had a similar setup, large stored procedures with lots of large temp tables.
Our problem was that the joins with and between the temp tables were very slow.
I recommend that you look at your execution plan and try to add relevant indexes to the temp tables too if you have not already.

Slow Performance on Sql Express after inserting big chunks of data

We have noticed that our queries run slower on databases that had big chunks of data added (bulk insert) than on databases where the data was added on a record-by-record basis, even though the amounts of data are similar.
We use SQL Server 2005 Express and we have tried reindexing all indexes, without any better results.
Do you know of some kind of structural problem on the database that can be caused by inserting data in big chunks instead of one by one?
Thanks
One tip I've seen is to turn off Auto-create stats and Auto-update stats before doing the bulk insert:
ALTER DATABASE databasename SET AUTO_CREATE_STATISTICS OFF WITH NO_WAIT
ALTER DATABASE databasename SET AUTO_UPDATE_STATISTICS OFF WITH NO_WAIT
Afterwards, manually create statistics using one of two methods:
--generate statistics quickly using a sample of data from the table
exec sp_createstats
or
--generate statistics using a full scan of the table
exec sp_createstats @fullscan = 'fullscan'
You should probably also turn Auto-create and Auto-update stats back on when you're done.
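That is simply the reverse of the statements above:
ALTER DATABASE databasename SET AUTO_CREATE_STATISTICS ON WITH NO_WAIT
ALTER DATABASE databasename SET AUTO_UPDATE_STATISTICS ON WITH NO_WAIT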
Another option is to check and defrag the indexes after a bulk insert. Check out Pinal Dave's blog post.
Probably SQL Server allocated new disk space in many small chunks. When doing big transactions, it's better to pre-allocate much space in both the data and log files.
That's an interesting question.
I would have guessed that Express and non-Express have the same storage layout, so when you're Googling for other people with similar problems, don't restrict yourself to Googling for problems in the Express version. On the other hand though, bulk insert is a common-place operation and performance is important, so I wouldn't consider it likely that this is a previously-undetected bug.
One obvious question: which is the clustered index? Is the clustered index also the primary key? Is the primary key unassigned when you insert, and therefore initialized by the database? If so then maybe there's a difference (between the two insert methods) in the pattern or sequence of successive values assigned by the database, which affects the way in which the data is clustered, which then affects performance.
Something else: as well as indexes, people say that SQL Server uses statistics (which it creates as a result of running previous queries) to optimize its execution plan. I don't know any details of that, but as well as "reindexing all indexes", check the execution plans of your queries in the two test cases to ensure that the plans are identical (and/or check the associated statistics).
