I'm having trouble sorting a dexie table.
It's likely I'm just not understanding a simple conceptual difference between Dexie tables and Dexie collections, so my apologies for asking what's probably a simple question.
I have this code that works just fine:
1 db.transaction('r', db.TABLE1, function() {
2 return db.TABLE1.where('FIELD1').equals('VALUE1').toArray();
3 }).then(function (passedvar) {
4 for (var i = 0; i < passedvar.length; i++) {
5 // Do things with passedvar[i]
6 }
7 }).catch...
What I'm trying to do is replace line #2 with this code, but it doesn't work:
return db.TABLE1.where('FIELD1').equals('VALUE1').reverse().sortBy('FIELD1').toArray();
So my goal is just to descending sort the results of a .where query. If the code above can be altered to work, then great. If I'm doing it all wrong and there's a better way, that's great too.
Thanks everyone,
Frank
Answering my own question. The problem was that I thought I needed to have .toArray() to be able to iterate through the returned value. I don't.
So, take .toArray() out and everything works exactly as the very well written Dexie documentation promised.
Here's what I ended up with for line #2
return db.TABLE1.where('FIELD1').equals('VALUE1').reverse().sortBy('FIELD1');
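For completeness, here's the whole pattern in one piece, with the same placeholder table and field names (untested as written, it's just the original snippet with line #2 swapped out):

db.transaction('r', db.TABLE1, function () {
    // sortBy() already resolves to a plain, sorted array, so no .toArray() is needed;
    // reverse() before sortBy() makes the sort descending
    return db.TABLE1.where('FIELD1').equals('VALUE1').reverse().sortBy('FIELD1');
}).then(function (passedvar) {
    for (var i = 0; i < passedvar.length; i++) {
        // Do things with passedvar[i], now sorted descending by FIELD1
    }
}).catch(function (err) {
    console.error(err);
});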
That's it!
Frank
So basically how do you handle permissions?
Let's say we have a list of Posts of some kind, with a first argument to limit the number of posts. Only the owner and approved users can read the posts; everyone else can't. What is the best way to implement this?
query {
  viewer {
    posts(first: 10) {
      id
      text
    }
  }
}
What I'm currently thinking is to have a single source of truth for whether a user can read a post or not, and hook it up with the dataloader module.
But how do I query for exactly 10 posts? If I query my DB for exactly 10 rows and then filter them later with some business logic, I can end up with, for example, only 8 posts returned.
A solution is to not put a limit on the query, but that's not very efficient. So what is a good way to go about this?
Inspiration from here
(1) https://dev-blog.apollodata.com/auth-in-graphql-part-2-c6441bcc4302
(2) https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af
(1) solved it by using
export const DB = {
Lists: {
all: (user_id) => {
return sql.raw("SELECT id FROM lists WHERE owner_id is NULL or owner_id = %s", user_id);
}
}
}
as the query, and then filtering out which rows can be read:
resolve: (root, _, ctx) => {
// factor out data fetching
return DB.Lists.all(ctx.user_id)
.then( lists => {
// enforce auth on each node
return lists.map(auth.List.enforce_read_perm(ctx.user_id));
});
}
So, we can clearly see that it's querying for all the rows, even if, say, the first argument was 1, which is what I'm trying to avoid.
Maybe I'm approaching the problem the wrong way; since the business logic lives in a different layer than the DB, there may be no option but to query all the rows. Any help appreciated.
For future reference and for other people searching for solutions:
I used DataLoader to solve the authentication problem.
Literally implemented what they did in https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af and used this boilerplate repo as guidance. Not much more to say than that.
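In case the shape of it helps anyone, here is a rough sketch of that pattern: fetch the candidate rows, then run each one through a per-request, DataLoader-backed permission check. This is not the actual code from the boilerplate repo; checkCanRead and fetchPostsForUser are placeholder names for your own data layer, and over-fetching by a factor of two is just one simple way to still end up with the requested number of items after filtering.

const DataLoader = require('dataloader');

// One set of loaders per request, so permission checks are batched and
// cached only for the lifetime of that request.
function createLoaders(userId) {
    return {
        canRead: new DataLoader(function (postIds) {
            // Placeholder: must resolve to an array of booleans, one per id,
            // saying whether userId may read that post.
            return checkCanRead(userId, postIds);
        })
    };
}

const resolvers = {
    Viewer: {
        posts: function (viewer, args, ctx) {
            // Over-fetch a little, since the auth filter below may drop rows.
            return fetchPostsForUser(ctx.user_id, args.first * 2) // placeholder
                .then(function (candidates) {
                    return ctx.loaders.canRead
                        .loadMany(candidates.map(function (p) { return p.id; }))
                        .then(function (allowed) {
                            return candidates
                                .filter(function (_, i) { return allowed[i]; })
                                .slice(0, args.first);
                        });
                });
        }
    }
};

Because the loader caches per request, checking the same post from two different fields in the same query only hits the permission source once.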
After looking at some SO questions and issues on the RethinkDB GitHub, I have failed to come to a clear conclusion about whether an atomic upsert is possible.
Essentially I would like to perform the same operation as ZINCRBY using Redis.
If member does not exist in the sorted set, it is added with increment
as its score (as if its previous score was 0.0). If key does not
exist, a new sorted set with the specified member as its sole member
is created.
The current implementation appears to differ from almost all databases I have used, in that data is replaced or inserted rather than updated. This is a simple use case: updating the last visit, the number of clicks, or a product quantity. So I must be missing something very obvious, because I cannot see a simple way to do this.
Yes, it is possible. After a get on the key, perform an atomic replace. Something like this might work:
function set_or_increment_score(player, points) {
    return r.table('scores').get(player).replace(
        row => ({
            id: player,
            score: r.branch(
                row.eq(null),                 // document doesn't exist yet
                points,                       // so start from the given points
                row('score').add(points))     // otherwise increment the existing score
        }));
}
It has the following behaviour:
> set_or_increment_score("alice", 1).run(conn)
{ inserted: 1 }
> set_or_increment_score("alice", 2).run(conn)
{ replaced: 1 }
It works because get returns null when the document doesn't exist, and a replace on a non-existing document turns into an insert. See the documentation for replace.
So I ended up using the following code to get around the no-update issue.
r.db("test").table("t").insert(
{id:"A", type:"player", species:"warrior", score:0, xp:0, armor:0},
{conflict: function(id, oldDoc, newDoc) {
return newDoc.merge(oldDoc).merge(
{armor: oldDoc("armor").add(1)});
}
}
)
Do you think this is more readable/elegant or do you see any issues with the code compared to your sample?
I have a table "posts" with a "timestamp" field.
Now, for every user that has more than 1 post, I want to get all of their posts EXCEPT the most recent one.
With this query I can successfully check the users who have more than 1 post:
r.table("post")
.group('userId')
.count()
.ungroup()
.filter(r.row("reduction").gt(1))
I can get the last post of a specific user by doing
r.table("post")
.filter({userId: 'xxx'})
.max('timestamp')
Now I need to tie those together somehow, and then compare the timestamp of each row with the max('timestamp') to see if they are not equal. The following is what I had, but it's obviously wrong:
.filter(r.row('timestamp').ne(r.row('timestamp').max('timestamp')('timestamp')))
Any advice how I bring all this together?
Something like this ought to work:
r.table('post')
.group({
index: 'userId'
})
.ungroup()
.filter(function(doc) {
return doc('reduction').count().gt(1)
})
.group('group')('reduction')
.nth(0)
.orderBy(
r.desc('timestamp')
).skip(1)
With reservations for syntax errors; I built this query using Python and then converted it to JavaScript. I'm especially unsure about the .nth(0) part, as I've never used it in JavaScript. In Python it's just [0].
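For what it's worth, .nth() does exist in the JavaScript driver, so that part should translate directly from the Python version (untested):

// Python:     r.table('post').order_by('timestamp')[0]
// JavaScript: the [0] becomes .nth(0)
r.table('post').orderBy('timestamp').nth(0)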
Below is a snippet of a simplified version of a problem I am having with Entity Framework v4, where the first load seems to take around 30 seconds on a table with 36 rows!
After that it is very quick to load until you change the search params; then it takes 30 seconds again, but once that combination of search params has been used once, it is quick.
This is repeated each time a different combination of params is used.
IQueryable<User> result = GetAllUsers();
if (!String.IsNullOrWhiteSpace(firstNameSearchParam))
{
    result = result.Where(u => u.firstname.Contains(firstNameSearchParam));
}
if (!String.IsNullOrWhiteSpace(lastNameSearchParam))
{
    result = result.Where(u => u.lastname.Contains(lastNameSearchParam));
}
var ret = result.ToArray();
Any ideas would be really appreciated.
I'm not sure if pre-compiling the views will help. I tried but couldn't get it to work.
How long does it take when you execute the query on the SQL side?
You can use the idea mentioned by Scott here: Dynamic Linq.
I think this will also work for you with Entity Framework, and another idea is to use the Entity Framework metadata.
Hope this helps.
Regards
I have this query:
var iterator = criteria.binaryAssetBranchNodeIds.GetEnumerator();
iterator.MoveNext();
var binaryAssetStructures = from bas in db.BinaryAssetStructures
where bas.BinaryAssetStructureId == iterator.Current
select bas;
When I iterate over binaryAssetStructures with a foreach loop, no problems occur. When I try this
var binaryAssetStructure = binaryAssetStructures.ElementAt(0);
I get following error:
Unable to cast object of type 'System.Linq.Expressions.MethodCallExpression' to type 'SubSonic.Linq.Structure.ProjectionExpression'
First() for example does work... What am I missing here...
I don't know SubSonic at all, but FWIW a similar issue exists with the Entity Framework. In that case it boils down to the fact that there's no direct translation of ElementAt to SQL.
First() can be easily translated to SELECT TOP 1 FROM ... ORDER BY ..., but the same is not easily expressed for ElementAt.
You could argue that e.g. ElementAt(5) should be translated to SELECT TOP 5 FROM ... ORDER BY ... and then the first four elements simply discarded, but that doesn't work very well if you ask for ElementAt(100000).
In EF, you can partially overcome this issue by forcing the expression to be evaluated first, which can be done with calls to AsEnumerable, ToList or ToArray.
For example
var binaryAssetStructure = binaryAssetStructures.AsEnumerable().ElementAt(0);
I hope this helps, although it's not explicitly directed at SubSonic.