Performance in Elasticsearch

I am just getting started with Elasticsearch.
I have two cases of data in a relational database, and in both cases I want to find the records from the first table as quickly as possible.
Case 1: tables linked 1:n (for example Invoice - Invoice items)
Should I save the data to Elasticsearch as all rows from the detail table together with the master_id, or group all the data from the detail table into a single string?
Case 2: tables linked n:1 (for example Invoice - Customer)
Should I save the data, as in case 1, to an independent index, or add another column to the previous index?
The problem is that sometimes I only need to search for records that contain a specific invoice item, sometimes a specific customer, and sometimes both an invoice item and a customer.
Should I create one index containing all the data, or all 3 variants?
Another question: is it possible to speed up searches in Elasticsearch when the stored data is, for example, only an EAN (a 13-digit number) rather than plain text?
Thanks,
Jaroslav

You should denormalize and use a single index for all your data (invoices, items, and customers) for the best performance. Although Elasticsearch supports joins and parent-child relationships, their performance is nowhere near what you get when all the data is part of a single index, and a quick benchmark on your data will show this easily.
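For illustration, a minimal sketch of that single denormalized index using the Python Elasticsearch client (8.x style); the index name and field names are assumptions, not something from the question:
from elasticsearch import Elasticsearch
es = Elasticsearch("http://localhost:9200")
# One document per invoice, with the customer and the invoice items embedded.
es.index(index="invoices", id=1, document={
    "invoice_number": "2024-0001",
    "customer": {"id": 42, "name": "ACME s.r.o."},
    "items": [
        {"ean": "8594001234567", "name": "Widget", "qty": 3},
        {"ean": "8594007654321", "name": "Gadget", "qty": 1},
    ],
})
# Search by item EAN, by customer, or by both in one query.
resp = es.search(index="invoices", query={
    "bool": {
        "must": [
            {"match": {"items.ean": "8594001234567"}},
            {"match": {"customer.name": "ACME"}},
        ]
    }
})
print(resp["hits"]["hits"])
As for the EAN question: if you map the EAN as a keyword field, exact matches on the full 13-digit code stay fast; full-text analysis is only needed for fields you actually search as text.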

Related

Adding Index To A Column Having Flag Values

I am a novice at tuning Oracle queries, so I need some help.
If I have a SQL query like:
select a.ID,a.name.....
from a,b,c
where a.id=b.id
and ....
and b.flag='Y';
then will adding an index on the FLAG column of table b help tune the query in any way? The FLAG column has only two values, Y and N.
With a standard btree index, the SQL engine can find the row or rows in the index for the specified value quickly thanks to its tree structure, then use the physical address (the rowid) stored in the index to access the desired row in a second hop. It's like looking in the index of a book to find the page number. So that is:
Go to index with the key value you want to look up.
The index tells you the physical address in the table.
Go straight to that physical address.
That is nice and quick for something like a unique customer ID. It's still OK for something nonunique, like a customer ID in a table of orders, although the database has to go through the index entries and for each one go to the indicated address. That can still be faster than slogging through the entire table from top to bottom.
But for a column with only two distinct values, you can see that it is going to be more work going through all of the index entries for 'Y' for example, and for each one going to the indicated location in the table, than it would be to just forget the index and scan the whole table in one shot.
That's unless the values are unevenly distributed. If there are a million Y rows and ten N rows then an index will help you find those N rows fast but be no use for Y.
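To make the arithmetic concrete, here is a toy Python illustration of the reasoning above (this is not how Oracle stores indexes, just the counting argument): with only two distinct values, one cheap index probe still leaves roughly half the table to visit row by row.
# Toy model of an index: a mapping from key value to the rowids that hold it.
index = {"Y": [], "N": []}
for rowid in range(1_000_000):
    index["Y" if rowid % 2 == 0 else "N"].append(rowid)
matching_rowids = index["Y"]   # one cheap probe for flag = 'Y' ...
print(len(matching_rowids))    # ... but ~500,000 separate table visits still follow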
Adding an index to a column with only 2 values normally isn't very useful, because Oracle might just as well do a full table scan.
From your query it looks like it would be more useful to have an index on id, because that would help with the join a.id=b.id.
If you really want to get into tuning then learn to use "explain plan", as that will give you some indication of how much work Oracle needs to do for a query. Add (or remove) an index, then rerun the explain plan.
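For example, with the python-oracledb driver (connection details are placeholders and the query is the hypothetical one from the question):
import oracledb
conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1")
cur = conn.cursor()
# Ask Oracle to record the plan it would use for the query.
cur.execute("""
    EXPLAIN PLAN FOR
    SELECT a.id, a.name
    FROM a, b
    WHERE a.id = b.id
    AND b.flag = 'Y'
""")
# Read the plan back; compare the output before and after adding an index.
cur.execute("SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")
for (line,) in cur:
    print(line)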

Oracle database help optimizing LIKE searches

I am on Oracle 11g and we have these 3 core tables:
Customer - CUSTOMERID|DOB
CustomerName - CUSTOMERNAMEID|CustomerID|FNAME|LNAME
Address - ADDRESSID|CUSTOMERID|STREET|CITY|STATE|POSTALCODE
I have about 60 million rows on each of the tables and the data is a mix of US and Canadian population.
I have a front-end application that calls a web service and they do a last name and partial zip search. So my query basically has
where CUSTOMERNAME.LNAME = ? and ADDRESS.POSTALCODE LIKE '?%'
They typically provide the first 3 digits of the zip.
The address table has an index on all street/city/state/zip and another one on state and zip.
I did try adding an index exclusively for the zip and forced Oracle to use that index in my query, but that didn't make any difference.
For returning about 100 rows (I have pagination to only return 100 at a time) it takes about 30 seconds which isn't ideal. What can I do to make this better?
The problem is that the filters you are applying are not very selective and they apply to different tables. This is bad for an old-fashioned btree index. If the content is very static you could try bitmap indexes; more precisely, a function-based bitmap join index on the first three letters of the last name and a bitmap join index on the postal code column. This assumes that very few people whose last name starts with certain letters live in an area with a certain postal code.
CREATE BITMAP INDEX ix_customer_custname ON customer(SUBSTR(cn.lname,1,3))
FROM customer c, customername cn
WHERE c.customerid = cn.customerid;
CREATE BITMAP INDEX ix_customer_postalcode ON customer(SUBSTR(a.postalcode,1,3))
FROM customer c, address a
WHERE c.customerid = a.customerid;
If you are successful you should see the two bitmap indexes becoming AND connected. The execution time should drop to a couple of seconds. It will not be as fast as a btree index.
Remarks:
You may have to play around a bit to see whether it is more efficient to create one index or two, and whether the function-based variants are actually helpful.
If you decide to make them function-based, you should include the exact same function calls in the WHERE clause of your query (see the sketch after these remarks); otherwise the index will not be used.
DML operations will be considerably slower. This is only useful for tables with static data. Note that DML operations will block whole row "ranges". Concurrent DML operations will run into problems.
Response time will probably still be seconds, not instantaneous as with a btree index.
AFAIK this will work only on the enterprise edition. The syntax is untested because I do not have an enterprise db available at the moment.
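To illustrate the remark about matching function calls, here is a rough sketch using the python-oracledb driver; the connection details are placeholders and the exact query shape is an assumption, not the poster's real code. The redundant SUBSTR() predicates repeat the expressions used in the index definitions so the indexes can be picked up:
import oracledb
conn = oracledb.connect(user="app", password="secret", dsn="localhost/orclpdb1")
cur = conn.cursor()
cur.execute("""
    SELECT c.customerid, cn.fname, cn.lname, a.postalcode
    FROM customer c
    JOIN customername cn ON cn.customerid = c.customerid
    JOIN address a ON a.customerid = c.customerid
    WHERE cn.lname = :lname
      AND SUBSTR(cn.lname, 1, 3) = SUBSTR(:lname, 1, 3)   -- matches ix_customer_custname
      AND a.postalcode LIKE :zip3 || '%'
      AND SUBSTR(a.postalcode, 1, 3) = :zip3               -- matches ix_customer_postalcode
""", {"lname": "SMITH", "zip3": "900"})
for row in cur.fetchmany(100):   # the first page of 100 results
    print(row)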
If this is still not fast enough you can create a materialized view with customerid, last name and postal code and but a btree index on it. But that is kind of expensive, too.
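And a compact sketch of the materialized-view fallback (the view and index names are made up; same python-oracledb setup as in the sketch above):
import oracledb
conn = oracledb.connect(user="app", password="secret", dsn="localhost/orclpdb1")
cur = conn.cursor()
cur.execute("""
    CREATE MATERIALIZED VIEW mv_customer_search
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND AS
    SELECT c.customerid, cn.lname, a.postalcode
    FROM customer c
    JOIN customername cn ON cn.customerid = c.customerid
    JOIN address a ON a.customerid = c.customerid
""")
# A plain btree index on the columns the search filters on.
cur.execute("CREATE INDEX ix_mv_lname_zip ON mv_customer_search (lname, postalcode)")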

Scan on DynamoDB table or Query on global secondary index or a local index (What's the best solution?)

I have an AWS DynamoDB table called "Users", whose hash key/primary key is "UserID", which consists of emails. It has two attributes, the first called "Daily Points" and the second "TimeSpendInTheApp". Now I need to run a query or scan on the table that will give me the top 50 users with the highest points and the top 50 users who have spent the most time in the app. This query will be executed only once a day by a cron AWS Lambda. I am trying to find the best solution for this query or scan. For me, cost is more important than speed or efficiency. Maintaining a global secondary index or a local index on points can be a costly operation, as I have to assign read and write units for those indexes, which I want to avoid. The "Users" table will have a maximum of 100,000 to 150,000 records, and on average it will have 50,000 records. What are my best options? Please suggest.
My first thought is that I can scan the whole table with a filter expression for records above a certain number of points (5,000, for example). After this scan, if 50 or more records are found, simply sort the values and take the top 50. If the scan returns no or very few results, reduce the filter threshold (to 3,000, for example) and scan again. If a threshold (2,500, for example) returns too many records, say 5,000 or more, raise it again. Is this even possible? I guess it would also need to handle pagination. Is it advisable to scan a table which has 50,000 records?
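Roughly, I am imagining something like the following boto3 sketch (the threshold is arbitrary and the attribute names are the ones above):
import boto3
from boto3.dynamodb.conditions import Attr
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")
def scan_above(threshold):
    # Scan the whole table page by page, keeping only items above the threshold.
    items, kwargs = [], {"FilterExpression": Attr("Daily Points").gte(threshold)}
    while True:
        response = table.scan(**kwargs)
        items.extend(response["Items"])
        if "LastEvaluatedKey" not in response:
            break
        kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]
    return items
candidates = scan_above(5000)
top50 = sorted(candidates, key=lambda item: item["Daily Points"], reverse=True)[:50]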
Any advice or suggestion will be helpful. Thanks in advance.
Firstly, creating indexes for the above use case doesn't simplify the process, as they offer no solution for aggregation or sorting.
I would export the data to Hive and run the queries there rather than writing code to determine the result, especially as it is a batch job executed only once per day.
Something like below:-
Create Hive table:-
CREATE EXTERNAL TABLE hive_users(userId string, dailyPoints bigint, timeSpendInTheApp bigint)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "Users",
"dynamodb.column.mapping" = "userId:UserID,dailyPoints:Daily_Points,timeSpendInTheApp:TimeSpendInTheApp");
Queries:-
SELECT dailyPoints, userId from hive_users order by dailyPoints desc limit 50;
SELECT timeSpendInTheApp, userId from hive_users order by timeSpendInTheApp desc limit 50;
Hive Reference

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way I could implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of documents belonging to this category. Shuffle it.
Create a Bloom Filter containing all of the entries viewed by a particular user
Traverse the index using the iterator, randomly selecting items and using the Bloom filter to skip already-viewed items.
If you track via a table which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this a single operation, but there's no guarantee that the URL will be successfully delivered to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves on to the next one, reads it, repeats this a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities is cleared and you may return a previously viewed entity again.
In that option I would recommend saving a (hash) set of already-viewed entity IDs; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of hitting an already-viewed ID is so small that it will take O(1) time in most cases.
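A tiny sketch of that option in Python (names are made up; in practice the random pick would come from the DB rather than an in-memory list):
import random
viewed = set()                        # already-viewed document IDs for this user
all_ids = list(range(1, 500_001))     # stand-in for the document IDs of one category
def next_random_document():
    # Optimistic retry: with a small viewed set, the first pick is almost always unseen.
    while True:
        candidate = random.choice(all_ids)
        if candidate not in viewed:
            viewed.add(candidate)
            return candidate
doc_id = next_random_document()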
Option 2:
A user pages through the entities, and the viewed entities are saved across all users and every visit to your page.
In that case you will probably use up all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
In that option I would get all the IDs for this topic, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
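A sketch of this option, with collections.deque standing in for the linked list:
import random
from collections import deque
category_ids = list(range(1, 500_001))   # stand-in for all IDs of one category
random.shuffle(category_ids)
unseen = deque(category_ids)
def next_unseen_document():
    return unseen.popleft() if unseen else None   # O(1) per request
doc_id = next_unseen_document()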
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
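A quick sketch of that idea, using sqlite3 as a stand-in for the real database (table and column names are made up):
import random
import sqlite3
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE viewed (user_id TEXT, category_id INTEGER, document_id INTEGER)")
db.execute("CREATE UNIQUE INDEX ix_viewed ON viewed (user_id, category_id, document_id)")
def pick_unread(user_id, category_id, candidate_ids):
    # Optimistic loop: a random candidate is almost always unread, and the
    # indexed triple makes the membership check cheap.
    while True:
        doc_id = random.choice(candidate_ids)
        seen = db.execute(
            "SELECT 1 FROM viewed WHERE user_id = ? AND category_id = ? AND document_id = ?",
            (user_id, category_id, doc_id)).fetchone()
        if seen is None:
            db.execute("INSERT INTO viewed VALUES (?, ?, ?)", (user_id, category_id, doc_id))
            return doc_id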
I would opt for a pseudorandom approach:
1.) Determine number of elements in category to be viewed (SELECT COUNT(*) WHERE ...)
2.) Pick a random number in range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above you don't have to store any state information at all.
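A compact sketch of the scheme, again with sqlite3 standing in for the real database (table and column names are made up; LIMIT/OFFSET here corresponds to the dialect-specific clauses mentioned above):
import random
import sqlite3
db = sqlite3.connect("documents.db")
def random_document(category_id):
    (count,) = db.execute(
        "SELECT COUNT(*) FROM documents WHERE category_id = ?", (category_id,)).fetchone()
    if count == 0:
        return None
    offset = random.randrange(count)   # random position within a stable order
    return db.execute(
        "SELECT * FROM documents WHERE category_id = ? ORDER BY id LIMIT 1 OFFSET ?",
        (category_id, offset)).fetchone()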
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user to the document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is still null.
You should get constant-time writes and reads, amazing scalability, etc. if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission critical that a user never gets a duplicate document).
I've solved something similar in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
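For illustration, a rough sketch of that scheme using the Python Elasticsearch client instead of raw Lucene (the index name and sample values are made up; Elasticsearch is Lucene-based, so the query shape is the same idea):
import random
from elasticsearch import Elasticsearch
es = Elasticsearch("http://localhost:9200")
# One document per text, with multi-valued category and user fields.
es.index(index="texts", id=123, document={
    "textId": 123,
    "categoryId": [1, 4],
    "userId": ["ann"],   # users who have already read this text
})
# All texts in category 4 that user "jj" has not read yet.
resp = es.search(index="texts", query={
    "bool": {
        "filter": [{"term": {"categoryId": 4}}],
        "must_not": [{"match": {"userId": "jj"}}],
    }
})
candidates = resp["hits"]["hits"]
choice = random.choice(candidates) if candidates else None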
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria, repeating until the choice is not among the user's last X selections.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X but I have in mind something like an X of say 16?

Paging, listing and grouping queries with AppFabric Cache

I read a lot of documents about AppFabric caching but most of them cover simple scenarios.
For example, adding city list data or shopping cart data to the cache.
But I need to add product catalog data to the cache.
I have 4 tables:
Product (1 million rows), ProductProperty (25 million rows), Property (100 rows), PropertyOption (300 rows)
I display paged search results, querying with some filters on the Product and ProductProperty tables.
I am creating a criteria set over the search result set, for example (4 items New Product, 34 items Phone, 26 items Book, etc.).
I run one grouping query over the Product table on columns such as IsNew, CategoryId, PriceType, etc.,
and another grouping query over the ProductProperty table on the PropertyId and PropertyOptionId columns to get how many items each property has.
Therefore, to display search results I make one query for the result set and two for creating the criteria list (with counts).
The search result query takes 0.7 seconds and the two grouping queries take 1.5 seconds in total.
When I run a load test I reach 7 requests per second, and 10% are dropped by IIS because the DB cannot respond in time.
This is why I want to cache Product and property records.
If I follow the items below (in AppFabric):
Create a named cache
Create a region for the product catalog data (the Product table with 1 million rows and the ProductProperty table with 25 million rows)
Tag items for querying and grouping
Can I query with some tags and get the 1st or 2nd page of results?
Can I query with some tags and get counts of some grouping results (displaying filter options with counts)?
And do I really need 3 servers? Can I provide a solution with only one AppFabric server? (And of course I know the risk.)
Do you know of any article or document that explains these scenarios?
Thanks.
Note:
Some additional tests:
I added about 30,000 items to the cache and its size is 900 MB.
When I run the GetObjectsInRegion method, it takes about 2 minutes: IList<KeyValuePair<string, object>> dataList = this.DataCache.GetObjectsInRegion(region).ToList();
The problem is converting to IList. If I use IEnumerable it works very quickly. But how can I get paging or grouping results without converting to my own type?
Another test:
I tried getting grouping counts with 30,000 product items, and getting the result for the grouping took 4 seconds, for example GetObjectsByTag("IsNew").Count() and nearly 50 other queries like that.
There is, unfortunately, no paging API for AppFabric in V1. Any of the bulk APIs, like GetObjectsByTag, are going to perform the query on the server and stream back all the matching cache entries to the client. From there you can obviously use any LINQ operators you want on the IEnumerable (e.g. Skip/Take/Count), but be aware that you're always pulling the full result set back from the server.
I'm personally hoping that AppFabric V2 will provide support via IQueryable instead of IEnumerable which will give the ability to remote the full request to the server so it could page results there before returning to the client much like LINQ2SQL or ADO.NET EF.
For now, one potential solution, depending on the capabilities of your application, is you can actually calculate some kind of paging as you inject the items into the cache. You can build ordered lists of entity keys representing each page and store those as single entries in the cache which you can pull out in one request and then individually (in parallel) or bulk fetch the items in the list from the cache and join them together with an in-memory LINQ query. If you wanted to trade off CPU for Memory, just cache the actual list of full entities rather than IDs and having to do the join for the entities.
You would obviously have to come up with some kind of keying mechanism to quickly pull these lists of objects from the cache based on the incoming search criteria. Some kind of keying like this might work:
private static string BuildPageListCacheKey(string entityTypeName, int pageSize, int pageNumber, string sortByPropertyName, string sortDirection)
{
return string.Format("PageList<{0}>[pageSize={1};pageNumber={2};sortedBy={3};sortDirection={4}]", entityTypeName, pageSize, pageNumber, sortByPropertyName, sortDirection);
}
You may want to consider doing this kind of thing with a separate process or worker thread that keeps the cache up to date, rather than doing it on demand and forcing users to wait if the cache entry isn't populated yet.
Whether or not this approach ultimately works for you depends on several factors of your application and data. If it doesn't exactly fit your scenarios maybe it will at least help shift your mind into a different way of thinking about solving the problem.
