How to write queries traversing n hops in GQLAlchemy? - memgraphdb

I have a query that I would like to model using GQLAlchemy:
MATCH (n)-[*2]->(c)
RETURN count(*)
How can I do this?

You can do the following within GQLAlchemy to traverse a set number of hops: use the current query-builder syntax with a small hack, passing the variable-length pattern as part of the edge label. This is how the following query could be implemented in the query builder:
MATCH ({name: 'United Kingdom'})<-[:LIVING_IN*1..2]-(n) RETURN n;
match().node(name="United Kingdom").from_(edge_label="LIVING_IN*1..2").node(variable="n").return_({"n": "n"}).execute()

Related

Asp.net Core 5.0 Linq Take(1).ElementAt(index)

I have a long LINQ query and I'm trying to take one item at a given index within that query.
My query is:
public IEnumerable<WebFairField> WebFairFieldForFair(Guid ID,int index)
{
return TradeTurkDBContext.WebFairField.Where(x => x.DataGuidID==ID)
.Include(x => x.Category)
.ThenInclude(x=>x.MainCategory).AsSplitQuery()
//
.Include(x=>x.FairSponsors)
.ThenInclude(x=>x.Company)
.ThenInclude(x=>x.FileRepos).AsSplitQuery()
//
.Include(x=>x.WebFairHalls.Take(1).ElementAt(index)) // That's the point where I'm stuck
.ThenInclude(x=>x.HallSeatingOrders)
.ThenInclude(x=>x.Company)
.ThenInclude(x=>x.FileRepos).AsSplitQuery()
//
.Include(x=>x.HallExpertComments).AsSplitQuery()
.Include(x=>x.Products).AsSplitQuery()
.Include(x=>x.FairSponsors).AsSplitQuery()
.AsNoTrackingWithIdentityResolution()
.ToList();
}
when I do that it gives me an error: Collection navigation access can be filtered by composing Where, OrderBy, ThenBy, Skip or Take operations.
I know I have to sort that data but I don't know how to do it. Can anyone show me how I should sort the data in that query?
Thanks for any suggestion!!
The error
As you have mentioned, the line of
.Include(x=>x.WebFairHalls.Take(1).ElementAt(index)) // That's the point where I'm stuck
is causing the error. Basically, you Take the first element and then try to call ElementAt on the result. Technically this is a problem because ElementAt is not one of the operations EF Core allows inside a filtered Include; as the error message says, only Where, OrderBy, ThenBy, Skip and Take are supported there.
It is also a logical error: if you take a single element, it does not make sense to then call ElementAt on it.
Skip
As Guru Strong pointed out, you can use Skip, as in Skip(index - 1).Take(1), which skips the first index - 1 elements and then takes the next one, i.e. the index-th element (for a one-based index).
Sort
If you need to sort, call OrderBy. If you need several sorting criteria, then use ThenBy.
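Putting the two together, here is a minimal sketch of the problematic Include rewritten as a filtered Include (EF Core 5+). The h.Id sort key is an assumption; use whatever column gives a stable order, and use Skip(index - 1) instead if your index is one-based:
return TradeTurkDBContext.WebFairField
    .Where(x => x.DataGuidID == ID)
    .Include(x => x.WebFairHalls
        .OrderBy(h => h.Id)   // sort first, as the error message requires (assumed key)
        .Skip(index)          // skip to the requested hall (zero-based, like ElementAt)
        .Take(1))             // then take exactly one
    .ThenInclude(x => x.HallSeatingOrders)
    .ThenInclude(x => x.Company)
    .ThenInclude(x => x.FileRepos).AsSplitQuery()
    .AsNoTrackingWithIdentityResolution()
    .ToList();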

How to properly perform like queries with Quickbase

I am working with Quickbase queries and everything seems to be fine.
Now I want to perform queries using LIKE operators. For instance, in PHP I can do something like:
$data ='content to search';
$stmt = $db->prepare('SELECT * FROM members where name like :name OR email like :email limit 20');
$stmt->execute(array(
':name' => '%'.$data.'%',
':email' => '%'.$data.'%',
));
Now in Quickbase, I have tried using the CT, EX and HAS operators with the OR operator. Only CT gives a nearby result, but not an exact one, as per the code below.
//Email = 7
//name =8
{
"from": "tableId",
"where": "{7.CT.'nancy#gmail.com'}OR{8.CT.'nancy'}"
}
Is there any way I can obtain a better search with LIKE-style operators in Quickbase? The documentation here does not cover that.
CT is the closest string comparison operator in Quick Base to LIKE in SQL, but since you can't use wildcards in Quick Base queries you might need to group multiple query strings to achieve the same result. There is also an SW operator that can sometimes be helpful for comparing parts of strings.
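For example, a rough sketch of approximating SQL's LIKE '%nancy%gmail.com%' on the email field by grouping two CT conditions with AND (the field IDs follow the question; adjust them to your table):
{
"from": "tableId",
"where": "{7.CT.'nancy'}AND{7.CT.'gmail.com'}"
}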

Search multiple indices against some index pattern using NEST

I searched the NEST docs but can't seem to find a proper answer for this.
My question is: how do I search multiple indices against some index pattern using NEST? E.g.
if I have indices with following names in Elasticsearch DB
media-2017-10, media-2018-03, media-2018-04
For specifying my selected indices, I need to use the wildcard character * like this:
client.Search<Media>(s => s
.Index("media-*")
. query goes here .....
Is this possible in NEST?
Yes, this works. Try it :)
.Index(...) accepts wildcard indices
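For example, a minimal sketch of the question's call, completed with an assumed match-all query so it runs as written (substitute your real query):
var response = client.Search<Media>(s => s
    .Index("media-*")               // wildcard index pattern
    .Query(q => q.MatchAll()));     // placeholder query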
You can also search multiple indices this way:
var indexPatterns = new[] {
    "media-*",
    "docs-*",
    "common-*"
};
// Nest.Indices converts implicitly from a string array.
Nest.Indices allIndices = indexPatterns;

return _elasticClient
    .SearchAsync<EsBaseModel>(s => s
        .Index(allIndices)
        .Size(_esConfig.MaxCallIDsSize)
        .RequestConfiguration(r => r.RequestTimeout(TimeSpan.FromMinutes(5)))
        .Query(q =>
            q.Match(m => m.Field("fieldname").Query(condition))
        ));
Steps:
Just create an array of index strings. The indices can be explicit names or patterns, as described in the NEST client docs.
Note: pay attention to optimizing the search, since it can take a while to search all the indices you've provided (you can optimize by ignoring very old dates, limiting the result size, etc.).

Why is Entity Framework's AsEnumerable() downloading all data from the server?

What is the explanation for EF downloading all result rows when AsEnumerable() is used?
What I mean is that this code:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
will download all the rows from the table before passing any row to the Where() method and there could be millions of rows in the table.
What I would like it to do, is to download only enough to gather 100 rows that would satisfy the Id % 2 == 0 condition (most likely just around 200 rows).
Couldn't EF load rows on demand, like you can with plain ADO.NET using the Read() method of SqlDataReader, and save time and bandwidth?
I suppose that it does not work like that for a reason and I'd like to hear a good argument supporting that design decision.
NOTE: This is a completely contrived example and I know normally you should not use EF this way, but I found this in some existing code and was just surprised my assumptions turned out to be incorrect.
The short answer: The reason for the different behaviors is that, when you use IQueryable directly, a single SQL query can be formed for your entire LINQ query; but when you use IEnumerable, the entire table of data must be loaded.
The long answer: Consider the following code.
context.Logs.Where(x => x.Id % 2 == 0)
context.Logs is of type IQueryable<Log>. IQueryable<Log>.Where is taking an Expression<Func<Log, bool>> as the predicate. The Expression represents an abstract syntax tree; that is, it's more than just code you can run. Think of it as being represented in memory, at runtime, like this:
Lambda (=>)
  Parameters
    Variable: x
  Body
    Equals (==)
      Modulo (%)
        PropertyAccess (.)
          Variable: x
          Property: Id
        Constant: 2
      Constant: 0
The LINQ-to-Entities engine can take context.Logs.Where(x => x.Id % 2 == 0) and mechanically convert it into a SQL query that looks something like this:
SELECT *
FROM "Logs"
WHERE "Logs"."Id" % 2 = 0;
If you change your code to context.Logs.Where(x => x.Id % 2 == 0).Take(100), the SQL query becomes something like this:
SELECT *
FROM "Logs"
WHERE "Logs"."Id" % 2 = 0
LIMIT 100;
This is entirely because the LINQ extension methods on IQueryable use Expression instead of just Func.
Now consider context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0). The IEnumerable<Log>.Where extension method is taking a Func<Log, bool> as a predicate. That is only runnable code. It cannot be analyzed to determine its structure; it cannot be used to form a SQL query.
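To make the difference concrete, here is a small standalone sketch: the same lambda text produces either an opaque delegate or an inspectable expression tree, depending only on the declared type.
using System;
using System.Linq.Expressions;

class ExpressionVsFunc
{
    static void Main()
    {
        // Compiled delegate: just runnable code, nothing to inspect.
        Func<int, bool> func = x => x % 2 == 0;

        // Expression tree: a data structure a LINQ provider can walk and translate to SQL.
        Expression<Func<int, bool>> expr = x => x % 2 == 0;

        Console.WriteLine(func(4));   // True
        Console.WriteLine(expr.Body); // ((x % 2) == 0)
    }
}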
Entity Framework and Linq use lazy loading. It means (among other things) that they will not run the query until they need to enumerate the results: for instance using ToList() or AsEnumerable(), or if the result is used as an enumerator (in a foreach for instance).
Instead, it builds a query using predicates, and returns IQueryable objects to further "pre-filter" the results before actually returning them. You can find more info here, for instance. Entity Framework will actually build a SQL query depending on the predicates you have passed it.
In your example:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
From the Logs table in the context, it fetches everything, returns an IEnumerable with the results, then filters them, takes the first 100, and finally materializes the results as a List.
On the other hand, just removing the AsEnumerable solves your problem:
context.Logs.Where(x => x.Id % 2 == 0).Take(100).ToList();
Here it will build a query/filter on the result, and only once ToList() is executed will it query the database.
It also means that you can dynamically build a complex query without actually running it on the DB until the end, for instance:
var logs = context.Logs.Where(a); // first filter
if (something) {
logs = logs.Where(b); // second filter
}
var results = logs.Take(100).ToList(); // only here is the query actually executed
Update
As mentioned in your comment, you seem to already know what I just wrote, and are just asking for a reason.
It's even simpler: since AsEnumerable casts the results to another type (an IQueryable<T> to an IEnumerable<T> in this case), it has to convert all the result rows first, so it has to fetch the data first. It's basically a ToList in this case.
Clearly, you understand why it's better to avoid using AsEnumerable() the way you do in your question.
Also, some of the other answers have made it very clear why calling AsEnumerable() changes the way the query is performed and read. In short, it's because you are then invoking the IEnumerable<T> extension methods rather than the IQueryable<T> extension methods, the latter allowing you to combine predicates before executing the query in the database.
However, I still feel that this doesn't answer your actual question, which is a legitimate question. You said (emphasis mine):
What I mean is that this code:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
will download all the rows from the table before passing any row to the Where() method and there could be millions of rows in the table.
My question to you is: what made you conclude that this is true?
I would argue that, because you are using IEnumerable<T> instead of IQueryable<T>, it's true that the query being performed in the database will be a simple:
select * from logs
... without any predicates, unlike what would have happened if you had used IQueryable<T> to invoke Where and Take.
However, the AsEnumerable() method call does not fetch all the rows at that moment, as other answers have implied. In fact, this is the implementation of the AsEnumerable() call:
public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)
{
return source;
}
There is no fetching going on there. In fact, even the calls to IEnumerable<T>.Where() and IEnumerable<T>.Take() don't actually start fetching any rows at that moment. They simply set up wrapping IEnumerables that will filter results as they are iterated over. The fetching and iterating of the results only really begins when ToList() is called.
So when you say:
Couldn't EF do on demand loading of rows like you can with plain ADO.NET using Read() method of SqlDataReader and save time and bandwidth?
... again, my question to you would be: doesn't it do that already?
If your table had 1,000,000 rows, I would still expect your code snippet to only fetch up to 100 rows that satisfy your Where condition, and then stop fetching rows.
To prove the point, try running the following little program:
static void Main(string[] args)
{
var list = PretendImAOneMillionRecordTable().Where(i => i < 500).Take(10).ToList();
}
private static IEnumerable<int> PretendImAOneMillionRecordTable()
{
for (int i = 0; i < 1000000; i++)
{
Console.WriteLine("fetching {0}", i);
yield return i;
}
}
... when I run it, I only get the following 10 lines of output:
fetching 0
fetching 1
fetching 2
fetching 3
fetching 4
fetching 5
fetching 6
fetching 7
fetching 8
fetching 9
It doesn't iterate through the whole set of 1,000,000 "rows" even though I am chaining Where() and Take() calls on IEnumerable<T>.
Now, you do have to keep in mind that, for your little EF code snippet, if you test it using a very small table, it may actually fetch all the rows at once, if all the rows fit within the value of SqlConnection.PacketSize. This is normal: every time SqlDataReader.Read() is called, it doesn't fetch only a single row over the network; to reduce the number of network round trips, it will try to fetch a batch of rows at a time. I wonder if this is what you observed, and whether it misled you into thinking that AsEnumerable() was causing all rows to be fetched from the table.
Even though you will find that your example doesn't perform nearly as badly as you thought, this is not a reason to avoid IQueryable. Using IQueryable to construct more complex database queries will almost always provide better performance, because you can then benefit from database indexes, etc., to fetch results more efficiently.
AsEnumerable() eagerly loads the DbSet<T> Logs
You probably want something like
context.Logs.Where(x => x.Id % 2 == 0).AsEnumerable();
The idea here is that you're applying a predicate filter to the collection before actually loading it from the database.
An impressive subset of the world of LINQ is supported by EF. It will translate your beautiful LINQ queries into SQL expressions behind the scenes.
I have come across this before.
The context's query is not executed until a LINQ method forces it to. Because you have written
context.Logs.AsEnumerable()
it assumes you have finished building the query, therefore compiles it and returns all rows.
If you changed this to:
context.Logs.Where(x => x.Id % 2 == 0).AsEnumerable()
It would compile a SQL statement that would get only the rows where the Id is divisible by 2.
Similarly if you did
context.Logs.Where(x => x.Id % 2 == 0).Take(100).ToList();
that would create a statement that would get the top 100...
I hope that helps.
LINQ to Entities builds a store expression from all the chained LINQ methods before the query is enumerated.
When you use AsEnumerable() and then Where() like this:
context.Logs.Where(...).AsEnumerable()
Where() knows that the previous call in the chain has a store expression, so it appends its predicate to it for lazy evaluation.
The overload of Where that is being called is different if you call this:
context.Logs.AsEnumerable().Where(...)
Here, Where() only knows that its source is an enumeration (it could be any kind of "enumerable" collection), and the only way it can apply its condition is by iterating over the collection through the IEnumerable implementation of the DbSet class, which must retrieve the records from the database first.
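In other words (a small sketch using the question's Log and context), the compiler picks a different Where overload purely based on the static type of the source:
IQueryable<Log> queryable = context.Logs;

// Binds to Queryable.Where(Expression<Func<Log, bool>>): translated to SQL by the provider.
var filteredInDb = queryable.Where(x => x.Id % 2 == 0);

// Binds to Enumerable.Where(Func<Log, bool>): the predicate runs in memory, row by row.
var filteredInMemory = queryable.AsEnumerable().Where(x => x.Id % 2 == 0);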
I don't think you should ever use this:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
The correct way of doing things would be:
context.Logs.AsQueryable().Where(x => x.Id % 2 == 0).Take(100).ToList();
Answer with explanations here:
What's the difference(s) between .ToList(), .AsEnumerable(), AsQueryable()?
Why use AsQueryable() instead of List()?

Using LINQ Expression Instead of NHIbernate.Criterion

If I were to select some rows based on certain criteria I can use ICriterion object in NHibernate.Criterion, such as this:
public List<T> GetByCriteria()
{
SimpleExpression newJobCriterion =
NHibernate.Criterion.Expression.Eq("LkpStatu", statusObject);
ICriteria criteria = Session.GetISession().CreateCriteria(typeof(T)).SetMaxResults(maxResults);
criteria.Add(newJobCriterion );
return criteria.List<T>();
}
Or I can use LINQ's where clause to filter what I want:
public List<T> GetByCriteria_LINQ()
{
ICriteria criteria = Session.GetISession().CreateCriteria(typeof(T)).SetMaxResults(maxResults);
return criteria.Where(item => item.LkpStatu == statusObject).ToList();
}
I would prefer the second one, of course. Because
It gives me strong typing
I don't need to learn yet-another-syntax in the form of NHibernate
The issue is: is there any performance advantage of the first one over the second one? From what I know, the first one will create SQL queries, so it will filter the data before it is passed into memory. Is this kind of performance saving big enough to justify its use?
As usual, it depends. First, note that your second snippet is missing .List() right after return criteria. Also note that you won't get the same results in both examples: the first one applies the where and then returns the top maxResults, while the second one first selects the top maxResults and then applies the where.
If your expected result set is relatively small and you are likely to use some of the results in lazy loads then it's actually better to take the second approach. Because all entities loaded through a session will stay in its first level cache.
Usually however you don't do it this way and use the first approach.
Perhaps you wanted to use NHibernate.Linq (located in the Contrib project), which does the LINQ-to-Criteria translation for you.
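For example, a hedged sketch assuming the contrib Session.Linq<T>() extension and mirroring the question's generic usage (in real code T would need to expose LkpStatu); the predicate is translated to SQL, so the filtering happens in the database:
public List<T> GetByCriteria_NHLinq()
{
    return Session.GetISession()
        .Linq<T>()                                    // NHibernate.Linq (Contrib) extension
        .Where(item => item.LkpStatu == statusObject) // translated to SQL
        .Take(maxResults)
        .ToList();
}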
I combined the two and made this:
var crit = _session.CreateCriteria(typeof (T)).SetMaxResults(100);
return (from x in _session.Linq<T>(crit) where x.field == <something> select x).ToList();
