Comparing temporal types using Memgraph MAGE - memgraphdb

I have Memgraph 2.0.0 running as Memgraph MAGE inside Docker. I have a query and I want to sort its results using properties with temporal types.
My query is:
CREATE({a: DATE('2023-01-15')}), ({a: DATE('2023-01-05')}), ({a: DATE('2023-02-01')})
MATCH (n) RETURN n.a ORDER BY n.a
This is the error that I can see in my log: [2023-02-14 10:05:17.354] [memgraph_log] [critical] Unhandled comparison for types

Try upgrading Memgraph to 2.1.0; this bug was fixed in Memgraph 2.1. Be on the lookout for breaking changes when you upgrade.

Related

How to Fix Document Not Found errors with find

I have a collection of Person documents, stored in a legacy MongoDB server (2.4) and accessed with the Mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the javascript syntax to perform the query, the result is found
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version. At the moment I'm restoring with mongorestore onto Amazon DocumentDB (MongoDB 3.6 compatible) to run tests, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 this works anyway
59de48f53137ec054b000004 this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea why).
That's the default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
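As a quick sanity check (plain Ruby, no driver required; the helper name is my own), you can confirm that both ids are well-formed 24-character hex ObjectId strings. That shifts suspicion toward a type mismatch (passing a String where a BSON::ObjectId is stored, or vice versa) or an overridden find, rather than malformed ids:

```ruby
def well_formed_object_id?(id)
  # A BSON ObjectId renders as exactly 24 hexadecimal characters.
  id.is_a?(String) && id.match?(/\A\h{24}\z/)
end

well_formed_object_id?('5ce24b1169902e72c9739ff6')  # => true
well_formed_object_id?('59de48f53137ec054b000004')  # => true (zeroes are fine)
well_formed_object_id?('59de48f53137ec054b00000')   # => false (23 chars)
```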

logstash 5 ruby filter

I've recently upgraded an elk-stack cluster from an old version to 5.1 and although everything looks great, I have an exception occurring frequently in the logstash log, which looks like this:
logstash.filters.ruby Ruby exception occurred: Direct event field references (i.e. event['field']) have been disabled in favor of using event get and set methods (e.g. event.get('field')). Please consult the Logstash 5.0 breaking changes documentation for more details.
The filter I have looks like this:
filter {
  ruby {
    init => "require 'time'"
    code => "event.cancel if event['@timestamp'] < Time.now - (4 * 86400)"
  }
}
Any suggestions?
The exception contains the answer:
Direct event field references (i.e. event['field']) have been disabled in favor of using event get and set methods (e.g. event.get('field')).
From that, it seems like event.get('@timestamp') is now preferred over event['@timestamp'].
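Applied to the filter above, a corrected version would look like this. Note that event.get('@timestamp') returns a LogStash::Timestamp rather than a Ruby Time, so I'm calling .time on it before comparing — that conversion is my assumption to keep the comparison type-safe; test it against your Logstash version:

```
filter {
  ruby {
    init => "require 'time'"
    code => "event.cancel if event.get('@timestamp').time < Time.now - (4 * 86400)"
  }
}
```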

Elasticsearch 1.1 IO Exception

I am using elasticsearch 1.1.2, but there are not many examples for this version, so I decided to start with this simple first piece of code:
http://pastebin.com/eVyL1mRr
But the result was like this (it was in wrong encoding):
java.io.IOException: An existing connection was forcibly closed by the remote host
(the message was originally mis-encoded Russian: "Удаленный хост принудительно разорвал существующее подключение")
at sun.nio.ch.SocketDispatcher.read0(Native Method)
..........
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)

mysql equivalent of limit x,y of mongodb through ruby on rails application

I want to select y documents starting from x through a Ruby on Rails application. There is a first(n) method in Ruby, but that obviously won't serve the purpose. I want a Ruby equivalent of the following MongoDB query:
db.users.find().limit(5).skip(10)
Is there any way to do this?
Check out the topic Active Record Query Interface - Limit and Offset:
User.limit(5).offset(10)
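Under the hood this generates SQL along the lines of LIMIT 5 OFFSET 10. The same slice expressed in plain Ruby, with an array standing in for the result set, just to illustrate how offset/limit maps onto MongoDB's skip/limit:

```ruby
users = (1..30).to_a            # stand-in for a result set of 30 rows
page  = users.drop(10).take(5)  # offset 10, limit 5 -- like .skip(10).limit(5)
page                            # => [11, 12, 13, 14, 15]
```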

debugging elasticsearch

I'm using Tire and Elasticsearch. The service has started on port 9200. However, it was returning two errors:
"org.elasticsearch.search.SearchParseException: [countries][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query":{"query_string":{"query":"name:"}}}]]"
and
"Caused by: org.apache.lucene.queryParser.ParseException: Cannot parse 'name:': Encountered "<EOF>" at line 1, column 5."
So I reinstalled Elasticsearch and the service container; the service starts fine.
Now, when I search using tire I get no results when results should appear and I don't receive any error messages.
Does anybody have any idea how I might find out what is wrong, let alone fix it?
First of all, you don't need to reindex anything in the usual cases. It depends on how you installed and configured Elasticsearch, but when you install and upgrade e.g. with Homebrew, the data are persisted safely.
Second, no need to reinstall anything. The error you're seeing means just what it says on the tin: SearchParseException, i.e. your query is invalid:
{"query":{"query_string":{"query":"name:"}}}
Notice that you didn't pass any query string for the name qualifier. You have to pass something, e.g.:
{"query":{"query_string":{"query":"name:foo"}}}
or, in Ruby terms:
Tire.search('test') { query { string "name:hey" } }
See this update to the Railscasts episode on Tire for an example how to catch errors due to incorrect Lucene queries.
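A small defensive sketch for this situation (the helper name and the match-all fallback are my own, not part of the Tire API) that avoids ever sending a dangling name: qualifier when the search term is blank:

```ruby
def build_query_string(field, term)
  # "name:" with no value makes Lucene throw a ParseException,
  # so fall back to a match-all query for blank input.
  return '*:*' if term.nil? || term.strip.empty?
  "#{field}:#{term.strip}"
end

build_query_string('name', '')     # => "*:*"
build_query_string('name', 'foo')  # => "name:foo"
```

You would then pass the result to your search call, so user-supplied blank input degrades to "match everything" instead of a parse error.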