Index in SQLite causing me trouble and slow queries - performance

Hello, I have a table with 800,000+ rows in SQLite.
I have indexes on every field I search on, but my query is SLOW:
SELECT "links".* FROM "links"
WHERE "links"."from_id_admin" = "XXXX"
AND "links"."from_type" = "Section"
ORDER BY category_rank DESC, rank DESC
It takes 800 ms (it returns only one row; all the time is wasted on the index lookup).
I investigated further with "EXPLAIN QUERY PLAN" and here is the result:
"SEARCH TABLE links USING INDEX index_links_on_from_type (from_type=?)"
"USE TEMP B-TREE FOR ORDER BY"
Weirdly, SQLite is using only the from_type index. The problem is that this index is not very selective (it has only 4 or 5 distinct values).
If I remove enough of the WHERE clause, my query is as fast as expected (2 ms):
SELECT "links".*
FROM "links"
WHERE "links"."from_id_admin" = "XXXXX"
ORDER BY category_rank DESC, rank DESC
Yeah, less selectivity somehow means a 400x speed improvement. So my questions are:
Is that normal behavior?
How can I avoid it?
Can I force the query planner to use the proper index?
Thanks for your answers ;-)
Yacine.

OK, I finally found it:
My SQLite database was populated with a large amount of data (2 GB), and I never ran "ANALYZE" afterwards to gather statistics and let SQLite optimize its index use.
So after a big change to your database, always run:
ANALYZE
It took a second and a half, and then everything worked properly!
Good to know I guess ;-)
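For reference, here is the fix, plus a fallback in case the planner still picks the wrong index. INDEXED BY is standard SQLite; the index name below is a guess following the naming convention of the index shown in the query plan:
-- Refresh the statistics the query planner uses to choose indexes
ANALYZE;
-- Last resort: pin a specific index by name
SELECT * FROM links INDEXED BY index_links_on_from_id_admin
WHERE from_id_admin = 'XXXX' AND from_type = 'Section'
ORDER BY category_rank DESC, rank DESC;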

Related

Filter settings in Oracle Views

I have 2 views like this (simplified):
CREATE VIEW BASE_VIEW AS
-- Simplified version; the view actually does a lot more.
SELECT * FROM MYTABLE;
CREATE VIEW OUTER_VIEW AS
-- The WHERE clause here makes this view return half the rows of BASE_VIEW.
SELECT * FROM BASE_VIEW WHERE SomeField = 'something';
My question is: shouldn't OUTER_VIEW execute in roughly half the time of BASE_VIEW? I don't see this behavior; it takes almost as long as BASE_VIEW does.
Since Oracle compiles the referenced views into your outer view, I thought it would be intelligent enough to optimize the query based on the outer view's WHERE clause. Shouldn't it?
EDIT: In fact, the "base view" query with the WHERE clause from the "outer view" takes half the time.
Imagine you have an entire room full of different kinds of fruit scattered all over the place. The view BASE_VIEW is like saying, "Retrieve all fruit in this room."
The OUTER_VIEW is like saying, "Retrieve all oranges from this room." If the fruit is not ordered in some way, you will spend roughly the same amount of time as the BASE_VIEW to search for all the oranges.
Now imagine you have all the fruit separated in baskets. Grabbing all the oranges becomes easy because you know exactly where they are.
Adding an index on SomeField is what orders the data so that Oracle doesn't have to search through the entire room of fruit. There are some restrictions on when an index can help. Depending on the number of rows in your table, the Oracle optimizer might decide it is faster to just grab all the rows anyway. I suggest you read through this guide to learn the proper way to index tables: Use The Index, Luke. It helped me a ton when I was starting out with indexing.
If SomeField does not come directly from MYTABLE and is instead computed in BASE_VIEW, then it might help to index the column(s) that SomeField is derived from. It is hard for us to know without seeing the data.
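A minimal sketch, assuming SomeField is a plain column of MYTABLE (the index name is illustrative):
-- Index the column the outer view filters on
CREATE INDEX mytable_somefield_ix ON MYTABLE (SomeField);
-- Re-check the plan; the optimizer will only use the index when it is selective enough
EXPLAIN PLAN FOR SELECT * FROM OUTER_VIEW;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);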

SQLite - Exploiting Sorted Indexes

This is probably simple, but I can't find the answer.
I'm trying to minimise the overhead of selecting records using ORDER BY
My understanding is that in...
SELECT gorilla, chimp FROM apes ORDER BY bananas LIMIT 10;
...the full set of matching records is retrieved so that the ORDER BY can be actioned, even if I only want the top ten records. This makes sense.
Trying to eliminate that overhead, I looked at the possibility of storing the records in a pre-defined order, but that would only work until insertions/deletions took place, at which point I would have to rebuild the table. Not viable.
I found an option in SQLite (I assume it also exists in other SQLs) to create a sorted index (https://www.sqlite.org/lang_createindex.html)...
CREATE INDEX index_name ON apes (bananas DESC);
...which I ASSUME means that the index (not the table) is sorted in descending order and will remain so after updates.
My question is: how do I exploit this? The SQLite documentation is a bit meh in this regard. Is there some kind of "SELECT FROM index" or equivalent? Or does the mere existence of a sorted index on a column mean that any results from querying that column will be returned in the index's order rather than the table's order?
Or am I missing something entirely?
I'm working with SQLite3, queried by PHP 7.1
ORDER BY with LIMIT is a little bit more efficient than a plain ORDER BY because only the first few rows need to be completely sorted.
Anyway, for a single-column index, the sort order (ASC or DESC) is pointless because SQLite can step through an index either forwards or backwards.
Indexes are used automatically when SQLite estimates that they would be useful.
To check what actually happens, run EXPLAIN QUERY PLAN (or set .eqp on in the sqlite3 shell).
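A quick sketch of that check (the index name is illustrative, and the exact plan wording varies between SQLite versions):
CREATE INDEX idx_apes_bananas ON apes (bananas);
EXPLAIN QUERY PLAN
SELECT gorilla, chimp FROM apes ORDER BY bananas LIMIT 10;
-- With the index in place, the plan should read something like
-- "SCAN apes USING INDEX idx_apes_bananas"
-- instead of "USE TEMP B-TREE FOR ORDER BY".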

Adding Index To A Column Having Flag Values

I am a novice at tuning Oracle queries, so I need help.
If I have a sql query like:
select a.ID,a.name.....
from a,b,c
where a.id=b.id
and ....
and b.flag='Y';
then will adding an index on the FLAG column of table b help tune the query in any way? The FLAG column has only two values, Y and N.
With a standard B-tree index, the SQL engine can quickly find the row or rows in the index for a specified value thanks to the index's binary structure, then use the physical address (the rowid) stored in the index to access the desired row in a second hop. It's like looking in the index of a book to find the page number. So that is:
Go to index with the key value you want to look up.
The index tells you the physical address in the table.
Go straight to that physical address.
That is nice and quick for something like a unique customer ID. It's still OK for something non-unique, like a customer ID in a table of orders, although the database has to go through all the matching index entries and, for each one, go to the indicated address. That can still be faster than slogging through the entire table from top to bottom.
But for a column with only two distinct values, you can see that going through all of the index entries for 'Y', for example, and visiting the indicated location in the table for each one, is more work than just forgetting the index and scanning the whole table in one shot.
That is, unless the values are unevenly distributed. If there are a million Y rows and ten N rows, then an index will help you find those N rows fast, but it will be no use for Y.
Adding an index to a column with only 2 values normally isn't very useful, because Oracle might just as well do a full table scan.
From your query it looks like it would be more useful to have an index on id, because that would help with the join a.id=b.id.
If you really want to get into tuning then learn to use "explain plan", as that will give you some indication of how much work Oracle needs to do for a query. Add (or remove) an index, then rerun the explain plan.
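A minimal sketch of that workflow (the table and column names come from the question; the index name is illustrative):
-- Index the join column rather than the low-cardinality flag
CREATE INDEX b_id_ix ON b (id);
EXPLAIN PLAN FOR
SELECT a.id, a.name
FROM a JOIN b ON a.id = b.id
WHERE b.flag = 'Y';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);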

Speeding up a postgres query (which works on 2 tables)

I am doing, in postgresql, something like this:
select A.first,
count(B.second) as count,
array_agg(A.second) as second,
array_agg(A.third) as third,
array_agg(B.kids) as kids
from A join B on A.first=B.second
group by A.first;
And it's taking forever (partly because the tables are pretty big). Limiting the output to 10 rows and looking at EXPLAIN ANALYZE told me there's a huge nested loop that takes most of the time.
Is there any way I can rewrite this query (which I'll then use in CREATE TABLE AS to create a new table) to speed it up while preserving the same output?
Thanks!
Ensure the column being used as a foreign key is indexed:
create index b_second on b(second);
Without such an index, every row of a would cause a table scan of b, which would make your query crawl.
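To confirm it worked, re-check the plan after creating the index; a sketch using the question's table names:
explain analyze
select A.first, count(B.second) as count
from A join B on A.first = B.second
group by A.first;
-- The huge nested loop should now be replaced by an index scan,
-- or by a hash/merge join fed by cheaper scans.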

Improve SQL Server 2005 Query Performance

I have a course search engine, and when I try to do a search, it takes too long to show the results. You can try a search here:
http://76.12.87.164/cpd/testperformance.cfm
At that page you can also see the database tables and indexes, if any.
I'm not using stored procedures - the queries are inline in ColdFusion.
I think I need to create some indexes but I'm not sure what kind (clustered, non-clustered) and on what columns.
Thanks
You need to create indexes on columns that appear in your WHERE clauses. There are a few exceptions to that rule:
If the column has only one or two unique values (the canonical example is "gender", where "Male" and "Female" are the only possible values, so there is no point in an index). Generally, you want an index that can cut down the rows to be processed by a significant amount: an index that only reduces the search space by 50% is not worth it, but one that reduces it by 99% is.
If you are searching for x LIKE '%something', then there is no point in an index. If you think of an index as specifying a particular order for rows, then sorting by x when you're searching for '%something' is useless: you're going to have to scan all rows anyway. (See the sketch after this list.)
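To illustrate that wildcard rule (the table and column come from the query below; only the leading '%' kills index use):
-- Can use an index on name: the engine seeks to the 'accounting' prefix
SELECT * FROM cpd_COURSES WHERE name LIKE 'accounting%';
-- Cannot use a plain index: the leading '%' forces a scan of every row
SELECT * FROM cpd_COURSES WHERE name LIKE '%accounting%';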
So let's take a look at the case where you're searching for the keyword 'accounting'. According to your results page, the SQL this generates is:
SELECT
*
FROM (
SELECT TOP 10
ROW_NUMBER() OVER (ORDER BY sq.name) AS Row,
sq.*
FROM (
SELECT
c.*,
p.providername,
p.school,
p.website,
p.type
FROM
cpd_COURSES c, cpd_PROVIDERS p
WHERE
c.providerid = p.providerid AND
c.activatedYN = 'Y' AND
(
c.name like '%accounting%' OR
c.title like '%accounting%' OR
c.keywords like '%accounting%'
)
) sq
) AS temp
WHERE
Row >= 1 AND Row <= 10
In this case, I will assume that cpd_COURSES.providerid is a foreign key to cpd_PROVIDERS.providerid in which case you don't need an index, because it'll already have one.
Additionally, the activatedYN column is a T/F column and (per my rule above about needing to restrict the search space by much more than 50%) should not be indexed either.
Finally, because you are searching with an x LIKE '%accounting%' query, you don't need an index on name, title, or keywords either - it would never be used.
So the main thing you need to do in this case is make sure that cpd_COURSES.providerid actually is a foreign key for cpd_PROVIDERS.providerid.
SQL Server Specific
Because you're using SQL Server, Management Studio has a number of tools to help you decide where to put indexes. The "Index Tuning Wizard" is usually pretty good at telling you what will give you good performance improvements. You just cut'n'paste your query into it, and it comes back with recommendations for indexes to add.
You still need to be a little bit careful with the indexes that you add, because the more indexes you have, the slower INSERTs and UPDATEs will be. So sometimes you'll need to consolidate indexes, or just ignore them altogether if they don't give enough of a performance benefit. Some judgement is required.
Is this the real live database data? 52,000 records is a very small table, relatively speaking, for what SQL 2005 can deal with.
I wonder how much RAM is allocated to the SQL server, or what sort of disk the database is on. An IDE or even SATA hard disk can't give the same performance as a 15K RPM SAS disk, and it would be nice if there was sufficient RAM to cache the bulk of the frequently accessed data.
Having said all that, I feel the " (c.name like '%accounting%' OR c.title like '%accounting%' OR c.keywords like '%accounting%') " clause is problematic.
Could you create a separate Course_Keywords table, with two columns, "courseid" and "keyword" (varchar(24) should be sufficient for the longest keyword?), and a composite clustered index on courseid+keyword? (A sketch follows below.)
Then, to make the UI even more friendly, use AJAX to apply keyword validation and auto-completion as people type words into the keywords input field. This gives you the behind-the-scenes benefit of having an exact keyword to search for, removing the need for pattern-matching with the LIKE operator...
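A rough sketch of that keyword table (all names and sizes are illustrative, and the extra keyword-first index is my addition to serve keyword lookups):
CREATE TABLE Course_Keywords (
    courseid INT NOT NULL,
    keyword VARCHAR(24) NOT NULL
);
-- The composite clustered index suggested above
CREATE CLUSTERED INDEX IX_Course_Keywords ON Course_Keywords (courseid, keyword);
-- A keyword-first index so WHERE keyword = ... can seek instead of scan
CREATE NONCLUSTERED INDEX IX_Course_Keywords_Kw ON Course_Keywords (keyword);
-- Exact-match search, no LIKE '%...%' needed:
SELECT c.*
FROM cpd_COURSES c
JOIN Course_Keywords k ON k.courseid = c.courseid
WHERE k.keyword = 'accounting';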
Using CF9? Try using Solr full text search instead of %xxx%?
You'll want to create indexes on the fields you search by. An index is a secondary list of your records, presorted by the indexed fields.
Think of an old-fashioned printed Yellow Pages: if you want to look up a person by their last name, the phone book is already sorted that way - Last Name is the clustered index field. If you wanted to find the phone numbers of everyone named Jennifer, or the person with the phone number 867-5309, you'd have to search through every entry, and it would take a long time. If there were an index in the back with all the phone numbers or first names listed in order, along with the page where each person is listed, it would be a lot faster. These would be the nonclustered indexes.
I would try changing your IN statements to EXISTS queries to see if you get better performance on the ZIP code lookup. My experience is that IN works great for small lists, but the larger they get, the better EXISTS performs, because the query engine stops searching for a specific value as soon as it finds the first match.
<CFIF zipcodes is not "">
EXISTS (
SELECT zipcode
FROM cpd_CODES_ZIPCODES
WHERE zipcode = p.zipcode
AND 3963 * (ACOS((SIN(#getzipcodeinfo.latitude#/57.2958) * SIN(latitude/57.2958)) +
(COS(#getzipcodeinfo.latitude#/57.2958) * COS(latitude/57.2958) *
COS(longitude/57.2958 - #getzipcodeinfo.longitude#/57.2958)))) <= #radius#
)
</CFIF>
