With G-WAN's key/value store, can more than one index be created for an entity? - key-value-store

For G-WAN's key/value store, can I create more than one index for a given single type of entity?
Also, can I query more than one index at once, such as finding an item with age > 5 and height > 100, if I indexed age and height?

can I create more than one index for a given single type of entity?
If you mean having several indexes for multiple fields in a record (more than one value per key), then yes, you can. Just look at the kv.c example: http://gwan.ch/source/kv.c
Can I query more than one index at once, such as finding an item with age > 5 and height > 100, if I indexed age and height?
You can easily write a function to do that: find the records that appear both in the first search on the first index AND in the second search on the second index.
This is very fast because the results are returned sorted.
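A minimal sketch of such an intersection in Python, assuming each index search returns record IDs in sorted order (search_index is a hypothetical stand-in for the actual G-WAN KV query):

def intersect_sorted(a, b):
    """Merge-intersect two sorted ID lists in O(len(a) + len(b))."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

# age_hits    = search_index("age",    lambda v: v > 5)    # sorted IDs (assumed)
# height_hits = search_index("height", lambda v: v > 100)  # sorted IDs (assumed)
# matches = intersect_sorted(age_hits, height_hits)

Because both inputs are already sorted, the merge walks each result list once, so the combined query costs little more than the two single-index searches themselves.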

Related

Calculated Field Expression, sum of field with specific values

I have a Feature field that has 4 different values (Features 1-4). There's also a use case field with one set of values (usecase 1-4) and another set of values (usecase 5-8).
I would like to find the sum of (usecase 1-4) that also contains feature 1-2 from the Feature field, and likewise the sum of (usecase 5-8) that contains feature 3-4.
Screenshot for a visual: (image omitted)
How would I accomplish this?
I tried sumOver(sum(count, [Feature], [usecase])) but that gave me the same numbers as the total column. Am I thinking about the partitions wrong?

Performance in Elasticsearch

I am just getting started with Elasticsearch.
I have two cases of data in a relational database, and in both cases I want to find the records from the first table as quickly as possible.
Case 1: tables bound 1:n (example: Invoice - Invoice items)
Should I store every row from the child table in Elasticsearch, or store the master_id and concatenate all the child data into a single string?
Case 2: tables bound n:1 (example: Invoice - Customer)
Should I store the data in an independent index, as in case 1, or add extra columns to the previous index?
The problem is that sometimes I only need to search for records that contain a specific invoice item, sometimes a specific customer, and sometimes both an invoice item and a customer.
Should I create one index containing all the data, or all 3 variants?
Another problem: is it possible to speed up searching in Elasticsearch when the stored data is, for example, only an EAN (a 13-digit number) rather than plain text?
Thanks,
Jaroslav
You should denormalize and use a single index for all your data (invoices, items, and customers) for the best performance. Elasticsearch does support joins and parent-child relationships, but their performance is nowhere near that of keeping all the data in a single index, and a quick benchmark on your data will prove it easily.
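A minimal sketch of the denormalized layout, assuming the 8.x elasticsearch Python client; the index and field names are illustrative:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumption: local cluster

# One denormalized document per invoice line: invoice, item, and customer
# fields sit side by side, so any combination can be filtered in one query.
doc = {
    "invoice_id": 42,
    "invoice_date": "2024-01-15",
    "item_ean": "4006381333931",  # map as keyword for exact matching
    "item_name": "Stapler",
    "customer_id": 7,
    "customer_name": "Acme s.r.o.",
}
es.index(index="invoices", id="42-1", document=doc)

# Search by item and customer at once:
es.search(index="invoices", query={
    "bool": {"filter": [
        {"term": {"item_ean": "4006381333931"}},
        {"term": {"customer_id": 7}},
    ]}
})

As for the EAN question: mapping item_ean as a keyword field (rather than analyzed text) and filtering with term queries skips full-text analysis entirely, which is both faster and more precise for fixed-format codes.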

Best solution for a ranged search on 2 (or N) sorted sets based on their score in Redis

I have some indexes (sorted sets) containing key names sorted with a timestamp as the score. These indexes are for searching; for example, one index apple and one index red: apple contains all the key names referencing an apple, and red all the keys referencing a red thing.
All of this is sorted by the creation timestamp of the main key, and I want to search on that.
For one field it's not a problem: with pagination I do a ZRANGE on apple, for example, to get all apples within the pagination range sorted by date. My problem is when I want to combine two fields.
For example, if I want all red apples, I can do it, sure, but I must either use a ZUNIONSTORE plus ZRANGE (too slow) or fetch all of both indexes and filter by date myself. I am looking for the fastest way to do that.
thank you for reading :)
The approach you described - a ZUNIONSTORE (or ZINTERSTORE, for an intersection such as "red apples") followed by a ZRANGE - is the most efficient within Redis core. Alternatively, you could use RediSearch for more robust indexing and searching abilities.
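A minimal sketch with the redis-py client, assuming both sorted sets use the creation timestamp as the score (the destination key name is illustrative):

import redis

r = redis.Redis()

# Intersect the two indexes; MAX keeps the original timestamp as the score.
r.zinterstore("tmp:red:apple", {"red": 1, "apple": 1}, aggregate="MAX")
r.expire("tmp:red:apple", 60)  # throwaway result; let it expire

# Page through the intersection, newest first (first page of 10).
page = r.zrevrange("tmp:red:apple", 0, 9, withscores=True)

Caching the intersection key for a short TTL lets subsequent pagination requests reuse it instead of recomputing the intersection on every page.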

DAX: Use measure outcome to populate calculated column without recalculating measure per row

I have two tables in PowerBI. One called 'Fact_WorstInstance' contains rows of (Index,Instance). For example:
1,2
2,1
3,2
One called 'Fact_AllInstances' contains rows of (Index,Instance,Value). For example:
1,1,'Red'
1,2,'Green'
2,1,'Amber'
2,2,'Red'
2,3,'Brown'
3,1,'Green'
3,2,'Blue'
The first table is essentially a pointer to the worst entry in the second table for the given index (as categorised by some external system).
There is a slicer controlling which Indexes are visible to the user.
What I want to do is find the worst instance value for the highest visible Index in the 'Fact_WorstInstance' table, and then get all the Index and Value rows from the 'Fact_AllInstances' table for that Instance.
For example, if the slicer isn't filtering, then (3,2) should be the active row from the 'Fact_WorstInstance' table, and this should be used to get Instance 2 from the 'Fact_AllInstances' table:
1,2,'Green'
2,2,'Red'
3,2,'Blue'
from the 'Fact_AllInstances' table.
I tried to do this in many different ways: creating a measure on 'Fact_WorstInstance' which gives the highest visible row, then using this measure to create a calculated column on 'Fact_AllInstances' (1 for worst, 0 for not worst), and then using this calculated column as a filter in Power BI.
The measure itself gives the expected value. The problem is that when the measure is used to create the calculated column, I cannot find a way to stop Index from being filtered down to the calculated column's current row - and therefore the measure outcome changes for each row.
My measure:
Worst Entry =
CALCULATE (
    FIRSTNONBLANK ( Fact_WorstInstance[Instance], 1 ),
    FILTER (
        ALLSELECTED ( Fact_WorstInstance ),
        Fact_WorstInstance[Index] = MAX ( Fact_WorstInstance[Index] )
    )
)
My column:
WorstColumn = if(Fact_AllInstances[Instance]=[Worst Entry],1,0)
So instead of getting the output above, I get
1,2,'Green'
2,1,'Amber' --> because for Index 2, the measure gives Instance 1 as worst
3,2,'Blue'
This is a possible solution you might want to implement.
First of all, calculated columns are not affected by slicers/page filters; you will need a measure for that, so the way you are approaching the problem won't work.
Create an additional calculated table that holds unique Instance values. In Power BI, on the Modeling tab there is an icon for creating a New Table, where you can use an expression to produce the table.
Use this expression:
InstancesCalcTable = VALUES(Fact_WorstInstance[Instance])
Now you have a table called InstancesCalcTable in your model.
Drag the Instance column from InstancesCalcTable and drop it on the Instance column of Fact_WorstInstance; this creates a relationship between InstancesCalcTable and Fact_WorstInstance via Instance. A line between both tables will be drawn in the Relationships view; double-click that line and you will see the Edit Relationship window.
Make sure it is configured as shown: (screenshot omitted)
Then do the same to create the relationship between InstancesCalcTable and Fact_AllInstances.
You will end up with a model like this: (screenshot omitted)
Then you can use the Index column of the Fact_WorstInstance table in a slicer, and it will filter the Fact_AllInstances table down to only the selected instances.
However, if you don't apply any filter, all rows in Fact_AllInstances will be shown.

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in a relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way to implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of documents belonging to that category, and shuffle it
Create a Bloom filter containing all of the entries viewed by a particular user
Traverse the index with an iterator, using the Bloom filter to skip already-viewed items
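For concreteness, a minimal sketch of this plan in Python (the Bloom filter is deliberately simplified and its sizing is illustrative):

import hashlib
import random

class BloomFilter:
    """Simplified Bloom filter; bit count and hash count are illustrative."""
    def __init__(self, size_bits=1 << 20, hashes=5):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Per-category index: shuffled document IDs (would be loaded from the DB).
category_ids = [101, 102, 103, 104]
random.shuffle(category_ids)
index_iter = iter(category_ids)

viewed = BloomFilter()  # one filter per user

def next_unviewed():
    """Advance the iterator, skipping items the Bloom filter has seen."""
    for doc_id in index_iter:
        if doc_id not in viewed:
            viewed.add(doc_id)
            return doc_id
    return None  # category exhausted for this user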
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
INSERT INTO viewed (userid, url_id) VALUES ('jj', 123);
On looking for a link...
SELECT p.url_id
FROM pages p
LEFT JOIN viewed v
  ON v.url_id = p.url_id
 AND v.userid = 'jj'   -- restrict the anti-join to this user
WHERE v.url_id IS NULL
ORDER BY RAND()
LIMIT 1;
This makes the database do an anti-join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this a single operation, but there's no guarantee that the URL will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves to the next one, reads it, repeats this a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities is cleared, so you can return an already-viewed entity.
In that option I would recommend saving a (hash) set of already-viewed entity IDs; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data so big, the chance of drawing an already-viewed ID is tiny, so this will take O(1) time on average.
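A minimal sketch of that optimistic loop (get_random_id is a hypothetical stand-in for a single-row random query against the DB):

import random

viewed = set()  # per <user, category>; small relative to the data by assumption

def get_random_id(category_ids):
    # stands in for e.g. SELECT id ... ORDER BY RAND() LIMIT 1
    return random.choice(category_ids)

def next_entity(category_ids, max_tries=50):
    for _ in range(max_tries):  # expected O(1) tries while viewed stays small
        doc_id = get_random_id(category_ids)
        if doc_id not in viewed:
            viewed.add(doc_id)
            return doc_id
    return None  # viewed grew large; fall back to an exhaustive strategy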
Option 2:
A user pages through the entities, the viewed entities are shared between all users, and they persist across visits to your page.
In that case you will probably work through all the entities in each category, and storing all the viewed entities plus checking whether an entity has been viewed will take some time.
In that option I would fetch all the IDs for the topic, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
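A sketch of that shuffled-queue variant, with a deque standing in for the linked list:

import random
from collections import deque

def build_queue(category_ids):
    ids = list(category_ids)
    random.shuffle(ids)
    return deque(ids)

queue = build_queue([101, 102, 103, 104])  # IDs loaded once per category

def next_entity():
    # O(1) per draw; never repeats until the queue is rebuilt
    return queue.popleft() if queue else None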
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate stable order]). Depending on the SQL dialect in use, there are different clauses to retrieve only the part of the result set you want (MySQL's LIMIT/OFFSET clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
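A minimal sketch of those three steps using Python's built-in sqlite3 (table and column names are illustrative; other dialects would swap LIMIT/OFFSET for TOP, etc.):

import random
import sqlite3

conn = sqlite3.connect("docs.db")

def random_document(category):
    # 1.) count the candidates
    count = conn.execute(
        "SELECT COUNT(*) FROM documents WHERE category = ?",
        (category,)).fetchone()[0]
    if count == 0:
        return None
    # 2.) pick a random position
    offset = random.randrange(count)  # 0 .. count-1
    # 3.) fetch exactly that row, using a stable order
    return conn.execute(
        "SELECT * FROM documents WHERE category = ? "
        "ORDER BY id LIMIT 1 OFFSET ?",
        (category, offset)).fetchone()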
You may want to consider a NoSQL solution like Apache Cassandra. These seem ideally suited to your needs: there are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for very sparsely populated tables.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named for that user to the document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission-critical that a user never gets a duplicate document).
I've solved something similar in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers, and it's basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
Store a user's past X selections in a cookie or something.
Return the last selections to the server along with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16.
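A sketch of that stateless round trip, assuming recent is the list of the user's last X selections parsed from the client's cookie:

import random

X = 16  # tune experimentally, per the suggestion above

def pick_text(candidate_ids, recent):
    seen = set(recent)
    pool = [i for i in candidate_ids if i not in seen]
    if not pool:
        return None, recent  # everything is recent; let the caller decide
    choice = random.choice(pool)
    updated = (list(recent) + [choice])[-X:]  # keep only the last X
    return choice, updated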
