Inconsistent handling of property values in JanusGraph - janusgraph

I'm running into a problem while playing with JanusGraph. I'm new to JanusGraph; I recently installed it and followed the documentation for adding a vertex with its properties. When I try to insert a float property with the key "abc" and the value 9.5f, I get an error, but when I change the key in the same query to "a" (or something else), it works fine.
Example with key abc
g.addV("T22").property("abc", 9.5f)
Error
Value [9.5] is not an instance of the expected data type for property key [abc] and cannot be converted. Expected: class java.lang.Integer, found: class java.lang.Float
Example with key a (works fine)
g.addV("T22").property("a", 9.5f)
g.V(163848208).valueMap()
{a=[10.5]}
Update
I got the same error again; it occurs only with these two property keys:
abc
Mailing_Code

By default, JanusGraph uses an automatic schema maker. When a new property key is first used, JanusGraph defines that key with a data type based on the value it sees. In your scenario, it sounds like the first usage of abc used an Integer rather than a Float. Here's an example recreating your scenario:
gremlin> JanusGraph.version()
==>0.2.0
gremlin> graph = JanusGraphFactory.open('inmemory')
==>standardjanusgraph[inmemory:[127.0.0.1]]
gremlin> g = graph.traversal()
==>graphtraversalsource[standardjanusgraph[inmemory:[127.0.0.1]], standard]
gremlin> g.addV("T22").property("abc", 9).iterate()
gremlin> g.tx().commit()
==>null
gremlin> g.addV("T22").property("abc", 9.5f).iterate()
Value [9.5] is not an instance of the expected data type for property key [abc] and cannot be converted. Expected: class java.lang.Integer, found: class java.lang.Float
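If you want to confirm which data type the schema maker locked in for a key, you can inspect it through the management API. Here's a minimal sketch in Java, assuming the same in-memory setup as above; against a real deployment you would open your configured graph instead:
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

public class InspectPropertyKey {
    public static void main(String[] args) {
        // Assumption: in-memory backend, matching the console example above
        JanusGraph graph = JanusGraphFactory.build().
                set("storage.backend", "inmemory").
                open();
        JanusGraphManagement mgmt = graph.openManagement();
        PropertyKey key = mgmt.getPropertyKey("abc");
        if (key != null) {
            // Prints the locked-in type, e.g. "abc -> class java.lang.Integer"
            System.out.println(key.name() + " -> " + key.dataType());
        } else {
            System.out.println("Key 'abc' has not been defined yet");
        }
        mgmt.rollback(); // read-only inspection, nothing to commit
        graph.close();
    }
}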
After a property key is defined, its data type cannot be changed. As described in the docs:
It is strongly encouraged to explicitly define all schema elements and to disable automatic schema creation by setting schema.default=none in the JanusGraph graph configuration.
Doing so will give you better control over the schema that is created. Here's an example of how to do that:
gremlin> graph = JanusGraphFactory.build().
......1> set('storage.backend', 'inmemory').
......2> set('schema.default', 'none').
......3> open()
==>standardjanusgraph[inmemory:[127.0.0.1]]
gremlin> mgmt = graph.openManagement()
==>org.janusgraph.graphdb.database.management.ManagementSystem#46aa712c
gremlin> mgmt.makeVertexLabel('T22').make()
==>T22
gremlin> mgmt.makePropertyKey('abc').dataType(Float.class).make()
==>abc
gremlin> mgmt.commit()
==>null
gremlin> g = graph.traversal()
==>graphtraversalsource[standardjanusgraph[inmemory:[127.0.0.1]], standard]
gremlin> g.addV('T22').property('abc', 9).iterate()
gremlin> g.tx().commit()
==>null
gremlin> g.addV('T22').property('abc', 9.5f).iterate()
gremlin> g.tx().commit()
==>null
gremlin> g.V().values('abc').map{ [ it.get(), it.get().getClass().getName() ] }
11:29:34 WARN org.janusgraph.graphdb.transaction.StandardJanusGraphTx - Query requires iterating over all vertices [()]. For better performance, use indexes
==>[9.5,java.lang.Float]
==>[9.0,java.lang.Float]
Here is a link to the JanusGraph schema documentation, which has more information.

Related

How can I index properties in Memgraph?

How can I index properties in the Node and Relationship classes I created in Memgraph with GQLAlchemy? Is it possible to create only a Label index?
Check out Memgraph's documentation. There are some great and simple examples that explain the process. For example,
from gqlalchemy import Memgraph, Node, Field

db = Memgraph()

class Animal(Node, index=True, db=db):
    name: str

class Human(Node):
    id: str = Field(index=True, db=db)
In the first class, Animal, the class argument index is set to True. That means that Memgraph will create a label index on the label Animal.
The other class, Human, has a Field() index argument set to True. Hence, Memgraph will create a label-property index on the property id of every node labeled Human.

How to get spring neo4j cypher custom query to populate an array of child relationships

Built-in queries to Spring Data Neo4j (SDN) return objects populated with depth 1 by default. This means that "children" (related nodes) of an object returned by a query are populated. That's good - there are actual objects on the end of references from objects returned by these queries.
Custom queries are depth 0 by default. This is a hassle.
In this answer it is described how to get Spring Boot Neo4j to populate an element related to the target of a custom query, i.e. to achieve one extra level of depth in the query results.
I am having trouble with this method when the related elements are in a list:
@NodeEntity
public class BoardPosition {
    @Relationship(type = "PARENT", direction = Relationship.INCOMING)
    public List<BoardPosition> children;
I have a query returning a target BoardPosition, and I need its children to be populated.
#Query("MATCH (target:BoardPosition) <-[c:PARENT]- (child:BoardPosition)
WHERE target.play={Play}
RETURN target, c, child")
BoardPosition findActiveByPlay(#Param("Play") String play);
The problem is that the query appears to return one separate result for each child, and those results aren't being used to populate the array of children in the target.
Instead of Spring Neo4j collating the children into the array on the target, I get an "only 1 result expected" error, as if the query were returning multiple results each with one child, rather than one result with the children in it.
org.springframework.dao.IncorrectResultSizeDataAccessException:
Incorrect result size: expected at most 1
How can I have a custom query to populate that target's children list?
(Note that the built-in findByPlay(play) does what I want - the built-in queries have a depth of 1 rather than 0, and it returns a target with populated children - but of course I need to make the query a bit more sophisticated than just "by Play"... that's why I need to solve this)
Versions:
org.springframework.data:spring-data-neo4j:5.1.3.RELEASE
neo4j 3.5.0
=== Edit ======
Your problem arises because you have a self-relationship (a relationship between nodes of the same label).
This is how Spring treats your query for a single node:
org.springframework.data.neo4j.repository.query.GraphQueryExecution

@Override
public Object execute(Query query, Class<?> type) {
    Iterable<?> result;
    ....
    Object ret = iterator.next();
    if (iterator.hasNext()) {
        throw new IncorrectResultSizeDataAccessException("Incorrect result size: expected at most 1", 1);
    }
    return ret;
}
Spring passes your node class type Class<?> type to neo4j-ogm and has your data read back.
As you know, the Neo4j server returns multiple rows for your query, one for each matching path:
A <- PARENT - B
A <- PARENT - C
A <- PARENT - D
If your nodes had different labels, i.e. different class types, the OGM would return only the single node corresponding to your query's return type, and there would be no problem.
But your nodes have the same label, i.e. the same class type, so Neo4j OGM cannot distinguish which one is the returned node: all nodes A, B, C, D are returned, hence the exception.
Regarding this issue, I think you should file a bug report.
As a workaround, you can change the query to return only the distinct target's identity property (the identity property is the 'primary key' of the node, which uniquely identifies it).
Then have your application load the node by that identity property:
public interface BoardRepository extends CrudRepository<BoardPos, Long> {
    @Query("MATCH (target:B) <-[c:PARENT]- (child:B) WHERE target.play={Play} RETURN DISTINCT target.your_identity_property")
    Long findActiveByPlay(@Param("Play") String play);

    BoardPos findByYourIdentityProperty(xxxx);
}
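A possible usage of that workaround, assuming the identity property is the node id so that CrudRepository's findById (which loads at the default depth of 1) covers the second step; "e4" is just a placeholder play value:
Long id = boardRepository.findActiveByPlay("e4"); // placeholder play value
BoardPos pos = (id != null)
        ? boardRepository.findById(id).orElse(null) // built-in load populates children (depth 1)
        : null;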
=== OLD ======
The Spring docs say (emphasis mine):
Custom queries do not support a custom depth. Additionally, #Query does not support mapping a path to domain entities, as such, a path should not be returned from a Cypher query. Instead, return nodes and relationships to have them mapped to domain entities.
So clearly your use case (populating child nodes via a custom query) is supported. The Spring framework already maps the results into a single node. (Indeed, my local setup confirms that the operation works properly.)
So your exception may be caused by one of several issues:
You have more than one target:BoardPosition with target.play={play}. The exception then refers to more than one target:BoardPosition, rather than to one BoardPosition with multiple child results.
You have an incorrect entity mapping. Is your mapping field annotated with @Relationship with the correct direction attribute? You might post your entity here.
Here is my local setup:
@NodeEntity(label = "C")
@Data
public class Child {
    @Id
    @GeneratedValue
    private long id;

    private String name;

    @Relationship(type = "PARENT", direction = "INCOMING")
    private List<Parent> parents;
}
public interface ChildRepository extends CrudRepository<Child, Long> {
    @Query("MATCH (target:C) <-[p:PARENT]- (child:P) "
            + "WHERE target.name={name} "
            + "RETURN target, p, child")
    Child findByName(@Param("name") String name);
}
(:C) <-[:PARENT] - (:P)
Consider the alternative query
MATCH (target:BoardPosition {play:{Play}})
RETURN target, [ (target)<-[c:PARENT]-(child:BoardPosition) | [c, child] ]
which uses a pattern comprehension to return not only the target but also its relationships and the related BoardPosition nodes within one result row. This ensures that the result is a single row (as long as your attribute play is unique).
I didn't try it with your example, but in my application this approach works fine; Neo4j OGM hydrates the objects as expected. It is important to include the related nodes as well as the relationships pointing to them.
If you enable Neo4j OGM logging, you can see that the built-in queries with depth 1 use the same approach.
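For reference, here's how that could look on the repository from the question; this is a sketch adapted from the findActiveByPlay method above, not something tested against your model:
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface BoardPositionRepository extends CrudRepository<BoardPosition, Long> {

    // One row per target: the pattern comprehension carries the PARENT
    // relationships and child nodes alongside the target itself.
    @Query("MATCH (target:BoardPosition {play:{Play}}) " +
           "RETURN target, [ (target)<-[c:PARENT]-(child:BoardPosition) | [c, child] ]")
    BoardPosition findActiveByPlay(@Param("Play") String play);
}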

Hibernate Spatial PostGis PSQLException column is of type point but expression is of type bytea

In a Spring Boot project (Java 8) with hibernate-spatial and PostgreSQL 9.4:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-spatial</artifactId>
<version>5.2.10.Final</version>
</dependency>
application.properties
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.spatial.dialect.postgis.PostgisPG94Dialect
spring.jpa.properties.hibernate.dialect=org.hibernate.spatial.dialect.postgis.PostgisPG94Dialect
(I tried also PostgisPG9Dialect)
My Entity has a property
...
import com.vividsolutions.jts.geom.Point;
....
@Column(columnDefinition = "Point")
private Point cityLocation;
If I save with a null value it's OK, but if I put a value
setCityLocation(new GeometryFactory().createPoint(new Coordinate(lng, lat)));
I have:
PSQLException: ERROR: column "city_location" is of type point but expression is of type bytea You will need to rewrite or cast the expression.
In my db I can see the column definition as
type: point
column size: 2147483647
data type: 1111
num prec radix: 10
char octet length: 2147483647
I'M GOING CRAZY... Why doesn't it work?
UPDATE (it still doesn't work; I'm collecting new information)
1) I'm thinking the problem could be how the db was created.
In my application.properties I also have :
spring.jpa.properties.hibernate.hbm2ddl.auto=update
so the schema is updated 'automatically' by Hibernate.
2) I can run with success a query directly on the db (I use "Squirrel SQL" as client)
update my_table set city_location = POINT(-13,23) where id = 1
and if I
select city_location from my_table where id = 1
the answer is
<Other>
I can't see the value... and I get the same answer for the record with a null value in the point column...
3) After setting a value in the 'point' column with a query, I'm no longer able to read from the table; I receive the exception:
org.geolatte.geom.codec.WktDecodeException : Wrong symbol at position: 1 in Wkt: (-13.0,23.0)
4) I looked inside hibernate-spatial-5.2.10.Final.jar and found two "geolatte"-named classes in the package org.hibernate.spatial:
GeolatteGeometryJavaTypeDescriptor.class
GeolatteGeometryType.class
5) And also (specifically for Squirrel SQL client experts):
if I try to change the value of a column in "my_table" (not the 'point' city_location, but any of the other columns), I receive an error similar to the one I receive in Java when I try to insert a point value:
Exception seen during check on DB. Exception was:
ERROR: operator does not exist: point = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Squirrel is made with Java... so I can accept this strange thing; maybe it composes the query in a 'wrong' way, or maybe it is connected to the value I see when I run a select...
Any ideas?
I found the solution!!
A fix to the code was needed, and a trick I read in another Stack Overflow question saved my life.
The problem was that the db column was created in the wrong way:
in the db, the column type should be geometry, NOT point.
I removed columnDefinition = "Point" from the @Column annotation and ran the query
CREATE EXTENSION postgis;
on my db following these instructions:
Postgis installation: type "geometry" does not exist
Krishna Sapkota, you are my new superhero!
Just remove columnDefinition = "POINT" from the @Column annotation and use the Point object as-is (i.e. use the default column definition).
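To make the fix concrete, the corrected mapping could look like the sketch below, using the JTS types from the question; the entity name and id field are illustrative, and it assumes the postgis extension has been created:
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class City {

    @Id
    @GeneratedValue
    private Long id;

    // No columnDefinition: with the PostGIS dialect, Hibernate Spatial
    // maps a JTS Point to a PostGIS geometry column on its own.
    private Point cityLocation;

    public void setCityLocation(double lng, double lat) {
        // JTS coordinates are (x, y) = (longitude, latitude)
        this.cityLocation = new GeometryFactory().createPoint(new Coordinate(lng, lat));
    }
}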

JCR query to return excerpt and parent identifier

I would like to query the Jackrabbit repository for the versions I have stored.
My repository looks like the following:
The following XPath query works well: //element(*, nt:frozenNode)[jcr:contains(., '" + keyword + "')]/rep:excerpt(.) and from the returned Row object I can get the excerpt found in the de:template node's 'de:content' property (for this to be full-text indexable I have my own Lucene configuration).
The problem, however, is knowing which element the excerpt was found for, since the query only returns the path found (/jcr:system/jcr:versionStorage/95/c8/3e/95c83efc-8441-4017-b3af-ae7be49f07e5/1.0/jcr:frozenNode/de:template) and the excerpt itself.
So I would like to know the identifier of the nt:versionHistory node, as stored in Jackrabbit.
I have a solution for this as well: get the parent nodes until the nt:versionHistory is reached and read its identifier:
Row row = (Row) rows.next();
Node node = row.getNode();
Node frozenNode = node.getParent();
Node versionNumber = frozenNode.getParent();
String versionId = versionNumber.getIdentifier();
However, this takes too much time, and with lots of versions it's bad for performance.
Therefore, I wonder if it's possible to include this version id in the query, such that no parent nodes need to be fetched after the query is executed.
That's probably not possible using an XPath query, but you could do it with SQL2 using a join:
SELECT n.*, excerpt(n), v.[jcr:uuid]
FROM [nt:frozenNode] AS n
INNER JOIN [nt:version] AS v ON ISDESCENDANTNODE(n,v)
WHERE contains(n.*,'Adobe')
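Executing that query from Java could look like the sketch below; note that the column name passed to Row.getValue follows the "selector.property" form used in the SELECT list and may need adjusting for your repository:
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.Row;
import javax.jcr.query.RowIterator;

public static void printVersionIds(Session session) throws RepositoryException {
    QueryManager qm = session.getWorkspace().getQueryManager();
    Query query = qm.createQuery(
            "SELECT n.*, excerpt(n), v.[jcr:uuid] " +
            "FROM [nt:frozenNode] AS n " +
            "INNER JOIN [nt:version] AS v ON ISDESCENDANTNODE(n, v) " +
            "WHERE contains(n.*, 'Adobe')", Query.JCR_SQL2);
    RowIterator rows = query.execute().getRows();
    while (rows.hasNext()) {
        Row row = rows.nextRow();
        // No getParent() walking needed: the id comes back as a query column
        String versionId = row.getValue("v.jcr:uuid").getString();
        System.out.println(versionId);
    }
}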

Delphi7 master detail relations query results in ORA-01036

I'm using Delphi7, Devart's dbExpress driver 4.70.
I drop two TSQLTables (call them A and B), two TDataSetProviders (dspA and dspB), two TClientDataSets (cdsA and cdsB), two TDataSources (dsA and dsB), and two DBGrids (gridA and gridB) onto the form. Everything is set up fine: if I set cdsA.Active to True, I can see the data in gridA, and the same goes for cdsB.
Now I want to implement the relation
A JOIN B ON a = b.
Field a is A's foreign key, referred to by B's field b, and b is also B's primary key. I set things up as follows (using the design-time tools):
cdsB.MasterSource := dsA;
cdsB.MasterFields := a;
cdsB.IndexFieldNames := b;
When I call cdsB.Open, I get this error:
ORA-01036: illegal variable name/number
The value of field a is always null in table A (there is no data). TSQLMonitor reports the following queries:
Execute: select * from A
...
Execute: select * from ENTI where (b is NULL)
:1 (Number,IN) = <NULL>
What did I miss, and how can this be fixed?
When using DataSnap, you should set the master/detail relationship on the source datasets, not the client ones. This creates a "dataset field" in the master client dataset, which you then assign to the detail client dataset. This approach is also more performant.
Anyway, it should work as well; it looks like there is something wrong with your SQL.
