Finding all items before a certain item in mongomapper - ruby

I want to find all items before a certain item in mongomapper.
For example, if I have five User documents saved and I pass in the ID of user 3, then I expect to get back the first two items.
Any ideas how to do this efficiently?

The ObjectId key in Mongo is composed of a few things, one of them being a timestamp, so you can use it as a condition to query on. See http://mongotips.com/b/a-few-objectid-tricks/ for more details.
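The same idea, illustrated here with the plain Java driver (the collection name and ObjectId value are placeholders; in MongoMapper this is the same $lt condition on the id):

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import org.bson.types.ObjectId;

public class UsersBefore {
    public static void main(String[] args) {
        MongoCollection<Document> users = MongoClients.create("mongodb://localhost")
                .getDatabase("mydb").getCollection("users");
        // ObjectIds begin with a creation timestamp, so _id < pivot matches
        // every document created before the given one, in insertion order.
        ObjectId pivot = new ObjectId("507f1f77bcf86cd799439011"); // id of "user 3"
        for (Document d : users.find(Filters.lt("_id", pivot)).sort(Sorts.ascending("_id"))) {
            System.out.println(d);
        }
    }
}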

Related

How to correctly structure a DynamoDB table for sorting results with no hash key condition?

I am new to DynamoDB and still trying to understand how to use it. I have what I believe is a simple task, but I'm not sure how to approach it.
I need to create a table to store categorized questions in which I need to store a click counter. So let's say something like this:
ID: 1
Question: What is this?
Category: General
Clicks: 100
Now, the problem is that I need an optimized way to get the most clicked questions overall and the most clicked questions per category, let's say a top 10.
In a classic SQL style it would be something like this:
SELECT ID, Question
FROM Questions
ORDER BY Clicks DESC
LIMIT 10
Can anyone point me in the right direction on how to structure the table? I tried sorting, but it always requires a hash key condition, so I don't see how to get the top 10 results rather than a single one.
Thanks in advance!
How are you accumulating the clicks? If you can figure out how to accumulate the click stream onto the table correctly, that will be your answer.
You will need a mechanism that maps incoming clicks to the item record being clicked and increments it using an atomic counter. With that in place you can create a sparse index and sort it in descending order to get what you need.
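A rough sketch of that shape with the AWS SDK for Java v2. The table name, key names, and the ClicksByCategory GSI (partition key Category, sort key Clicks) are all assumptions; a top 10 overall would need one more index keyed on a constant attribute shared by every item:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

public class TopQuestions {
    static final DynamoDbClient ddb = DynamoDbClient.create();

    // Atomic counter: ADD increments in place and works even if Clicks
    // does not exist on the item yet.
    static void recordClick(String id) {
        ddb.updateItem(UpdateItemRequest.builder()
                .tableName("Questions")
                .key(Map.of("ID", AttributeValue.builder().s(id).build()))
                .updateExpression("ADD Clicks :one")
                .expressionAttributeValues(Map.of(":one", AttributeValue.builder().n("1").build()))
                .build());
    }

    // The GSI keeps the items of one Category sorted by Clicks, so reading it
    // backwards with a limit is the analogue of ORDER BY Clicks DESC LIMIT 10.
    static QueryResponse top10(String category) {
        return ddb.query(QueryRequest.builder()
                .tableName("Questions")
                .indexName("ClicksByCategory")
                .keyConditionExpression("Category = :c")
                .expressionAttributeValues(Map.of(":c", AttributeValue.builder().s(category).build()))
                .scanIndexForward(false)
                .limit(10)
                .build());
    }
}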

Make a group by with a whereIn statement in Laravel

I am writing a little script that loads Attendee_logs and counts the total number of badge prints per hour.
First I load the ids of the attendees:
$allAttendees->pluck('id')->implode(',')
This gives me 389832, 321321 (these are the ids of the attendees, based on a group).
Now I want to group the logs by the hour, but I cannot figure out how to add the whereIn statement:
$badgesPrintedByDate = DB::table('Attendee_logs')
    ->select(DB::raw('hour(created_at)'), DB::raw('COUNT(id)'))
    ->whereIn('id', [$allAttendees->pluck('id')->implode(',')])
    ->groupBy(DB::raw('hour(created_at)'));
When I do it like this, I get an empty result, but when I remove the whereIn I get results.
So my question: how can I count the rows per hour while also passing in the ids? :)
The problem is that implode(',') hands whereIn an array containing one string, '389832, 321321', rather than the individual ids, so nothing matches. Pass a real array of ids instead; I think this is going to work:
$badgesPrintedByDate = DB::table('Attendee_logs')
    ->select(DB::raw('hour(created_at)'), DB::raw('COUNT(id)'))
    ->whereIn('id', $allAttendees->pluck('id')->all())
    ->groupBy(DB::raw('hour(created_at)'));
Instead of:
$allAttendees->pluck('id')->all()
which returns an array of ids, you can also use:
$allAttendees->pluck('id')->values()
Or:
$allAttendees->pluck('id')->toArray();

Duplicate results when sorting using a Spring-Data pageable object on a JPA repository

I have a rest-api that returns a list of users when called. The API uses the org.springframework.data.domain.Pageable to paginate and sort the results. This works by simply passing the pageable to the JPA repository, which then returns the desired page.
For some reason, when sorting by first name, a duplicate entry can appear, but only if multiple entries share the same first name. This never happens when sorting by lastName. Both are simply strings on the entity; there is no discernible difference besides the property name.
Have any of you ever encountered this and if so, how did you fix it?
EDIT: To clarify, there is basically no logic of mine between the controller and the repository. I just pass the pageable through and return the results.
EDIT 2: Solved! Interesting tidbit: the issue only occurred when sorting by first name because only then were there always records that appeared on both page 1 and page 2, regardless of which way the entries were sorted. Lucky me that our testers used that specific test data, or I might never have noticed.
Yes, I have seen that: a record can appear on, say, page 1 and then again on page 2.
This is an issue at the database level. With, for example, 10 items per page, if the items at positions 10 and 11 have the same value for the sorted property, it is effectively random which one appears in which position in each result set.
Therefore, apply a secondary sort on a unique property (the database ID, for example) to ensure a consistent ordering across requests.
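A minimal sketch of that fix in Spring Data (the "id" property name is an assumption):

import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;

public class StablePaging {
    // Rebuild the incoming pageable with "id" as a tie-breaker, so rows with
    // identical first names keep a stable order across pages.
    static Pageable withStableSort(Pageable incoming) {
        Sort stable = incoming.getSort().and(Sort.by("id"));
        return PageRequest.of(incoming.getPageNumber(), incoming.getPageSize(), stable);
    }
}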

Count how many times one post has been read

Is it possible to show how many times a post has been read? In WordPress there is a plug-in for this: https://wordpress.org/plugins/wp-postviews/
I don't know whether there is something similar in Anypic on Parse to count views.
Of course it would be nice if it could display who has read the post as well.
Thanks
I'm not sure which language you're working in.
But in any case you need to create:
An array column in Parse.com.
Then just make a query that adds the viewer's name to that array in viewWillAppear.
Now you can count the array to get an integer number of views, and you can display the viewers' names from the array.
Two options are:
Add a view-count column and increment it whenever needed.
Add an actions table that holds all actions within your webpage or app. This way you can store more data (custom analytics) in it, like button presses. When you want the view count, you just count the objects of a specific type. In the iOS SDK, countObjectsInBackgroundWithBlock does this job.
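For the counter option, a rough sketch with the Parse Android SDK (the "views" and "readers" column names are made up):

import com.parse.ParseObject;
import com.parse.ParseUser;

public class PostViews {
    // Call this when the post is displayed.
    static void recordView(ParseObject post) {
        post.increment("views"); // atomic server-side counter
        // Optionally remember who read it; addUnique avoids duplicates.
        post.addUnique("readers", ParseUser.getCurrentUser().getUsername());
        post.saveInBackground();
    }
}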

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see how I could implement this with slow NOT IN queries given a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing an algorithm that could be adapted to my needs?
Currently I'm considering the following approach (a rough sketch follows):
Read all the entries from the database.
Create a linked-list-based index for each category from the IDs of the documents belonging to that category, then shuffle it.
Create a Bloom filter containing all of the entries viewed by a particular user.
Traverse the index with its iterator, using the Bloom filter to skip already viewed items.
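Something like this, using Guava's BloomFilter for illustration (the sizes and names are placeholders, and the real plan would shuffle each category's index once rather than per call):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class UnseenPicker {
    // One filter per user, sized for the whole corpus at a 1% false-positive rate.
    final BloomFilter<Long> viewed = BloomFilter.create(Funnels.longFunnel(), 500_000, 0.01);

    Long next(List<Long> categoryIds) {
        List<Long> shuffled = new ArrayList<>(categoryIds);
        Collections.shuffle(shuffled);
        for (Long id : shuffled) {
            if (!viewed.mightContain(id)) { // a false positive only skips an unseen doc
                viewed.put(id);
                return id;
            }
        }
        return null; // everything in the category looks viewed
    }
}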
If you track, via a table, which entries the user has seen, try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This makes the database do a one-for-one join, and you're limiting the query to return a single entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this a single operation, but then there's no guarantee that the URL will be successfully delivered to the user.
It depends on how users get their random entries (both options are sketched below).
Option 1:
A user pages through some entities and stops after a couple of them: they see the current random entity, move on to the next one, read it, repeat a few times, and that's it. The next time this user (or another) requests an entity from this category, the set of already viewed entities has been cleared, so returning a previously viewed entity is acceptable.
For that option I would save a (hash) set of already viewed entity ids, and every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set. Because the set is so small and your data is so big, the chance of picking an already viewed id is tiny, so this takes O(1) most of the time.
Option 2:
The viewed entities are saved across all users and across every visit to your page. In that case you will probably go through all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
For that option I would get all the ids for the topic, shuffle them, and store them in a linked list. When you want a random unviewed entity, just take the head of the list and delete it (O(1)).
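A minimal sketch of both options in plain Java (the names are illustrative):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class RandomEntities {
    // Option 1: optimistic retry against a small per-user set of viewed ids.
    // Assumes far fewer viewed ids than total ids, so retries are rare.
    static Long pickUnseen(List<Long> allIds, Set<Long> viewed, Random rnd) {
        while (viewed.size() < allIds.size()) {
            Long id = allIds.get(rnd.nextInt(allIds.size()));
            if (viewed.add(id)) return id; // add() is false if already viewed
        }
        return null; // user has seen everything
    }

    // Option 2: shuffle once, then every pick is an O(1) pop of the head.
    static Deque<Long> shuffledQueue(List<Long> topicIds) {
        List<Long> copy = new ArrayList<>(topicIds);
        Collections.shuffle(copy);
        return new ArrayDeque<>(copy);
    }
}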
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
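Sketching that check with plain JDBC (the table and column names are invented; the composite primary key is the index that makes the lookup fast):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ViewedTriples {
    // Backed by e.g.:
    //   CREATE TABLE viewed (user_id INT, category_id INT, doc_id INT,
    //                        PRIMARY KEY (user_id, category_id, doc_id));
    static boolean alreadyViewed(Connection con, int user, int category, int doc)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT 1 FROM viewed WHERE user_id = ? AND category_id = ? AND doc_id = ?")) {
            ps.setInt(1, user);
            ps.setInt(2, category);
            ps.setInt(3, doc);
            try (ResultSet rs = ps.executeQuery()) { return rs.next(); }
        }
    }
}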
I would opt for a pseudorandom approach (sketched below):
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [some stable order]). Depending on the SQL dialect in use, there are different clauses for retrieving only the part of the result set you want (MySQL's LIMIT/OFFSET clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above you don't have to store any state information at all.
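A rough JDBC rendering of those three steps against MySQL (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Random;

public class PseudorandomPick {
    static Long randomDoc(Connection con, int categoryId, Random rnd) throws SQLException {
        long count; // step 1: how many documents are in the category
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT COUNT(*) FROM documents WHERE category_id = ?")) {
            ps.setInt(1, categoryId);
            try (ResultSet rs = ps.executeQuery()) { rs.next(); count = rs.getLong(1); }
        }
        if (count == 0) return null;
        long offset = (long) (rnd.nextDouble() * count); // step 2: random position
        // step 3: a stable order plus OFFSET selects exactly that position
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id FROM documents WHERE category_id = ? ORDER BY id LIMIT 1 OFFSET ?")) {
            ps.setInt(1, categoryId);
            ps.setLong(2, offset);
            try (ResultSet rs = ps.executeQuery()) { return rs.next() ? rs.getLong(1) : null; }
        }
    }
}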
You may want to consider a NoSQL solution like Apache Cassandra. It seems well suited to your needs: there are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for very sparsely populated tables.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, add a column named after the user id to that row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but no problem: reading it is still constant time.
Finding a new document for a user in a category is then simply a matter of selecting any row where that user's column is null.
You get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission-critical that a user never gets a duplicate document).
I've solved something similar in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId; pick one randomly and display it.
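That query looks roughly like this in Lucene (the field names follow the description above):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class UnreadTexts {
    // Texts in the category that this user has not read yet.
    static Query unread(String categoryId, String userId) {
        return new BooleanQuery.Builder()
                .add(new TermQuery(new Term("categoryId", categoryId)), BooleanClause.Occur.MUST)
                .add(new TermQuery(new Term("userId", userId)), BooleanClause.Occur.MUST_NOT)
                .build();
    }
}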
Store the user's past X selections in a cookie or something.
Return the last selections to the server along with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like 16; a sketch follows.
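A minimal sketch of that loop in plain Java (X and the id type are placeholders):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Random;

public class LastXPicker {
    static final int X = 16; // window size; tune experimentally

    // lastX comes back from the client (e.g. a cookie) with each request.
    static int pick(List<Integer> matching, Deque<Integer> lastX, Random rnd) {
        int id;
        do {
            id = matching.get(rnd.nextInt(matching.size()));
        } while (lastX.contains(id) && matching.size() > X); // avoid spinning on tiny result sets
        lastX.addLast(id);
        if (lastX.size() > X) lastX.removeFirst(); // evict the oldest selection
        return id;
    }
}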
