Suppose I have data about the names of players on different sports teams for different countries. What data structure should I use to store this dataset if my queries involve finding whether a particular player plays for a particular team of a country?
Which of the following is the better data structure, and why?
1) HashMap<Country, HashMap<Sport, HashSet<PlayerName>>>
OR
2)
class CountrySport {
    String country;
    String sport;
    // override hashCode() and equals() so instances can serve as HashMap keys
}
HashMap<CountrySport, HashSet<PlayerName>>
The lookup in both cases would be O(1), so which is the better data structure here?
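For reference, a minimal sketch of option 2 with the overrides filled in, assuming player names are plain Strings (the sample data is made up):

import java.util.*;

final class CountrySport {
    final String country;
    final String sport;

    CountrySport(String country, String sport) {
        this.country = country;
        this.sport = sport;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CountrySport)) return false;
        CountrySport other = (CountrySport) o;
        return country.equals(other.country) && sport.equals(other.sport);
    }

    @Override
    public int hashCode() {
        return Objects.hash(country, sport);
    }
}

public class RosterDemo {
    public static void main(String[] args) {
        Map<CountrySport, Set<String>> roster = new HashMap<>();
        roster.computeIfAbsent(new CountrySport("India", "Cricket"), k -> new HashSet<>())
              .add("Player A");
        // the membership query the question asks about
        boolean plays = roster.getOrDefault(new CountrySport("India", "Cricket"),
                Collections.emptySet()).contains("Player A");
        System.out.println(plays); // true
    }
}

Both layouts give the O(1) membership check; option 1's nesting additionally answers "all sports for a country" without scanning keys, while option 2's flat composite key needs only a single lookup and no intermediate maps.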
Consider two types of documents Company and Person:
Company has 2 fields:
name of type String
employees of type List of Person
Person has 2 fields:
name of type String
city of type String
How can I create a query that finds all the companies that have at least N employees in a given city?
EDIT: In other words, how is it possible to do something like this with Couchbase Lite?
I think there are several ways to approach this.
One suggestion is to create a view whose map stage, given a Company document, emits one key/value pair per employee. The key could be a compound of the company name and the employee's city, and the value can be anything (e.g. the employee's name). Then add a reduce function that sums all the index entries (which is what the map stage creates) that share the same key.
The output of the view's reduce stage is then the total number of employees, keyed by company + city. You can then query the view to get your result.
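For concreteness, a rough sketch of such a view against the Couchbase Lite 1.x Java API (error handling omitted; the view name, the "type" marker, and the exact document shape are assumptions, not something from the question):

View view = database.getView("employeesByCompanyCity");
view.setMapReduce(new Mapper() {
    @Override
    public void map(Map<String, Object> document, Emitter emitter) {
        if (!"company".equals(document.get("type"))) return; // assumed type marker
        List<Map<String, Object>> employees =
                (List<Map<String, Object>>) document.get("employees");
        if (employees == null) return;
        for (Map<String, Object> person : employees) {
            // one entry per employee, keyed by [company name, city]
            emitter.emit(Arrays.asList(document.get("name"), person.get("city")), null);
        }
    }
}, new Reducer() {
    @Override
    public Object reduce(List<Object> keys, List<Object> values, boolean rereduce) {
        if (rereduce) {
            int total = 0;
            for (Object v : values) total += ((Number) v).intValue();
            return total;
        }
        return values.size(); // employees per (company, city) key
    }
}, "1");

// query: group on the full two-part key, then keep rows whose count is >= n
Query query = view.createQuery();
query.setGroupLevel(2);
QueryEnumerator rows = query.run();
while (rows.hasNext()) {
    QueryRow row = rows.next();
    if (((Number) row.getValue()).intValue() >= n) {
        System.out.println(row.getKey() + " -> " + row.getValue());
    }
}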
Views and Queries are really powerful, but can take some thought. Focus on getting the information you need out of the View, so you can query flexibly.
Take a look at the View and Query documentation for more details.
Say we were modeling Users and Friends, and Friends have a type.
We could model it in Oracle like:
User: id, name, sex, age
Friendship: user_id, friend_id, type
So in HBase, we could do:
(this first model is from here, which is recommended by the HBase FAQ)
Table: Users
RowKey = <user_id>
Column Family = Info; Columns = "Name", "Sex", "Age"
Column Family = Friend; Columns = "Friend:<user_id>"=type
(where "Friend:"=type could be one more more user_ids)
or
Table: Users
RowKey = <user_id>
Column Family = Info; Columns = "Name", "Sex", "Age", "Friends"
(where "Friends" is a JSON string in the form [{user_id:, type:}, ...]
However, if a friend did not have a type, the second model could simply be [user_id:<user_id>, ...]. What would the first model do if friends didn't have a type?
What are the pros and cons of either approach?
One column holding a list of values breaks normalization rules. If you don't know what those are or why they're important, please do some research.
I don't think either model is correct. A one-to-many relationship ought to be modeled as such; both of your schemas break normalization rules.
It really depends on how many friends a user has and what your read and write access patterns are.
In the first case, with one column per friend, you can add a friend without reading all of the other friends. However, you also get a separate timestamp value per friend, which increases the total storage requirement per friend.
Also, if you don't always read the friends when you read a user, the first model doesn't force you to load them: you can scan just the Info column family and avoid the extra IO.
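To make those access patterns concrete, a sketch with the HBase Java client (table and family names follow the first model; the connection setup is assumed):

// write one friend as one column: no need to read the existing friends first
Table users = connection.getTable(TableName.valueOf("Users"));
Put put = new Put(Bytes.toBytes(userId));
put.addColumn(Bytes.toBytes("Friend"), Bytes.toBytes(friendId), Bytes.toBytes(type));
users.put(put);

// read only the Info family when the friends aren't needed
Get get = new Get(Bytes.toBytes(userId));
get.addFamily(Bytes.toBytes("Info"));
Result info = users.get(get);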
The downside of more column families is that you have more MemStores, and therefore more memory is required for your regions. It also means more non-sequential disk flushing, as each column family is flushed separately.
I want to match in-memory entities to data from DB tables and return a new in-memory DTO with a subset of the matched information. Matching involves two columns, so I am building a composite key on the fly. This works as long as I execute the queries before building the keys, effectively using LINQ to Objects for the matching.
When I don't execute the query right away, I get a runtime exception, as described in this MSDN article.
Here is my code and data model, simplified. I have
Rooms (as IEnumerable<Room>, already in memory)
Areas (as IEnumerable<Area>, already in memory)
Alarms (from the DB, as IQueryable from the context)
Alarms are tied to Areas and LocationIds. Rooms can have multiple Areas, and have one LocationId.
I want to build the set of Alarms that occurred in a set of Rooms. This involves matching each Alarm's Area and LocationId against each Room's Areas and LocationId.
from area in allAreas
let alarmKey = area.AreaName + area.Room.LocationId //AreaName is String, LocationId is integer
//....
However, this line involves an unsupported cast from int to String. How can I create the key?
If you don't mind a number of leading spaces in the converted LocationId, you can do:
let alarmKey = area.AreaName +
SqlFunctions.StringConvert((double)area.Room.LocationId)
SqlFunctions is in System.Data.Objects.SqlClient.
Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see how I could implement this with slow NOT IN queries given a large number of users and documents, so I figure I might need to implement a custom data structure for this purpose. Perhaps there is already a paper describing an algorithm that could be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of documents belonging to that category, and shuffle it
Create a Bloom Filter containing all of the entries viewed by a particular user
Traverse the index using its iterator, consulting the Bloom filter to select only items the user has not yet viewed (a rough sketch follows below).
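A sketch of that plan, using Guava's BloomFilter as a stand-in (the sizes and helper names are made up):

// steps 1-2: load the category's ids and shuffle them (hypothetical helper)
List<Long> index = loadIdsForCategory(category);
Collections.shuffle(index);

// step 3: one Bloom filter per user, sized for the expected number of views
BloomFilter<Long> seen = BloomFilter.create(Funnels.longFunnel(), 100_000, 0.01);

// step 4: walk the shuffled index, skipping anything the filter has marked as viewed
for (long id : index) {
    if (!seen.mightContain(id)) {
        seen.put(id);
        serve(id); // hypothetical
        break;
    }
}

The usual Bloom-filter caveat applies: a false positive means an occasional never-viewed document gets skipped, which is harmless here, while the never-repeat guarantee still holds.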
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This makes the database do a one-for-one join, and the query is limited to returning a single entry that the user has not yet seen.
Just a suggestion.
Edit: it is possible to make this a single operation, but there's no guarantee that the URL will be delivered successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them: the user sees the current random entity, moves on to the next one, reads it, repeats this a couple of times, and that's it.
The next time this user (or another) requests an entity from this category, the set of already-viewed entities has been cleared, so an already-viewed entity may be returned.
For this option I would recommend keeping a (hash) set of already-viewed entity ids; every time the user asks for a random entity, choose one at random from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of drawing an already-viewed id is tiny, so this takes O(1) time in most cases.
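A minimal sketch of this option, with the DB access behind a hypothetical helper:

Set<Long> viewed = new HashSet<>(); // this user's viewed ids, kept for the session
long id;
do {
    id = randomIdFromDb(category);  // hypothetical, e.g. ORDER BY RAND() LIMIT 1
} while (viewed.contains(id));      // almost never loops: the set is tiny vs. the data
viewed.add(id);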
Option 2:
Users page through the entities, and the viewed entities are remembered across all users and across every visit to your page.
In that case you will probably work through all the entities in each category, and storing all the viewed entities, plus checking whether an entity has been viewed, will take some time.
For this option I would fetch all the ids for the category, shuffle them, and store them in a linked list. When you want a random unviewed entity, just take the head of the list and delete it (O(1)).
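A sketch of this option, with the id fetch behind a hypothetical helper:

List<Long> ids = allIdsForCategory(category); // hypothetical fetch of every id
Collections.shuffle(ids);
Deque<Long> unviewed = new ArrayDeque<>(ids); // shared across users

Long next = unviewed.pollFirst(); // next unseen entity, O(1); null once exhausted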
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [a stable order]). Depending on the SQL dialect in use, there are different clauses for retrieving only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
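A hypothetical JDBC rendering of those three steps, assuming an open java.sql.Connection conn and invented schema names (MySQL LIMIT/OFFSET dialect):

// step 1: count the candidates
PreparedStatement countStmt = conn.prepareStatement(
        "SELECT COUNT(*) FROM documents WHERE category_id = ?");
countStmt.setLong(1, categoryId);
ResultSet rs = countStmt.executeQuery();
rs.next();
long count = rs.getLong(1);

if (count > 0) {
    // step 2: a random offset in [0, count)
    long offset = ThreadLocalRandom.current().nextLong(count);

    // step 3: stable order plus LIMIT/OFFSET to fetch exactly one document
    PreparedStatement pick = conn.prepareStatement(
            "SELECT * FROM documents WHERE category_id = ? ORDER BY id LIMIT 1 OFFSET ?");
    pick.setLong(1, categoryId);
    pick.setLong(2, offset);
    ResultSet doc = pick.executeQuery();
}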
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
edit: one of many possible solutions below:
create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user to the document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but that's no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e. it is not mission-critical that a user never gets a duplicate document).
I've solved a similar problem in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same idea, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
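A rough sketch of that query with the Lucene API (recent versions; the field names follow the answer, and searcher is an assumed IndexSearcher):

// documents in the category, excluding anything this user has already read
BooleanQuery.Builder query = new BooleanQuery.Builder();
query.add(new TermQuery(new Term("categoryId", categoryId)), BooleanClause.Occur.MUST);
query.add(new TermQuery(new Term("userId", userId)), BooleanClause.Occur.MUST_NOT);

TopDocs unread = searcher.search(query.build(), 100); // pick one hit at random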
Store a user's past X selections in a cookie or something.
Return the last selections to the server along with the user's new criteria.
Repeatedly choose a random text satisfying the criteria until it is not a member of the user's last X selections.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16.
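A sketch of that loop, with the cookie handling behind hypothetical helpers:

final int X = 16; // the window size suggested above

Deque<Long> lastX = decodeSelectionsFromCookie(request); // hypothetical
long pick;
do {
    pick = randomTextIdMatching(criteria); // hypothetical
} while (lastX.contains(pick));

lastX.addLast(pick);
if (lastX.size() > X) lastX.removeFirst(); // keep only the newest X
encodeSelectionsToCookie(response, lastX); // hypothetical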
I'm fairly new to the more complex parts of Core Data.
My application has a Core Data store with 15K rows and a single entity.
I need to display a subset of those rows in a table view filtered on a calculated search criteria, and for each row displayed add a value that I calculate in real time but don't store in the entity.
The calculation needs to use a couple of values supplied by the user.
A hypothetical example:
Entity: contains fields "id", "first", and "second"
User inputs: 10 and 20
Search / Filter Criteria: only display records where the entity field "id" is a prime number between the two supplied numbers. (I need to build some sort of complex predicate method here I assume?)
Display: all fields of all records that meet the criteria, along with a derived field (not in the Core Data entity) that is the sum of the "id" field and a random number, so each row in the table view would contain 4 fields:
"id", "first", "second", -calculated value-
From my reading/Googling it seems that a transient property might be the way to go, but I can't work out how to do this given that both the search criteria and the resulting property need to be calculated from user input.
Could anyone give me any pointers that will help me implement this code? I'm pretty lost right now, and the examples I can find in books etc. don't match my particular needs well enough for me to adapt them as far as I can tell.
The first thing you need to do is stop thinking in terms of fields, rows and columns, as none of those structures is actually part of Core Data. This matters here because Core Data supports arbitrarily complex fetches, but the SQLite store does not: if you use a SQLite store, your fetches are restricted to those supported by SQLite.
In this case, predicates aimed at SQLite can't perform complex operations such as calculating whether an attribute value is prime.
The best solution for your first case would be to add a boolean attribute isPrime and then modify the setter for your id attribute to calculate whether the newly set id value is prime, setting isPrime accordingly. That will be stored in the SQLite store and can be fetched against, e.g. isPrime == YES && ((first <= %@) && (second >= %@))
The second case would simply use a transient property for which you would supply a custom getter to calculate its value when the managed object was in memory.
One often overlooked option is to not use a SQLite store but an XML store instead. If the amount of data is relatively small, e.g. a few thousand text attributes with a total memory footprint of a few dozen megabytes, then an XML store will be super fast and can handle more complex operations.
SQLite is sort of the stunted stepchild in Core Data. It is useful for large data sets and low memory, but with memory becoming ever more plentiful, it's losing its edge. I find myself using it less these days. You should consider whether you really need SQLite in this particular case.