I've got some data that has been pumped into a Neo4j instance using the native API. The same instance is used by an app backed by Spring Data Graph. The repositories fail to find the data. I'm assuming that this is an issue due to indexes and/or missing properties.
When the data is pumped in, the following properties are set:
node.setProperty("__type__", "com.x.x.Class");
The index is set as follows:
Index<Node> typeIndex = indexManager.forNodes("__types__");
typeIndex.add(node, "className", "com.x.x.Class");
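For reference, this is how I check whether the node actually landed in that index; a sketch, assuming access to the same embedded GraphDatabaseService and its IndexManager:
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;

// Sketch: look the node up exactly the way it was indexed
Index<Node> typeIndex = indexManager.forNodes("__types__");
IndexHits<Node> hits = typeIndex.get("className", "com.x.x.Class");
try {
    for (Node n : hits) {
        // "<missing>" is printed if __type__ was never set on the node
        System.out.println(n.getId() + " -> " + n.getProperty("__type__", "<missing>"));
    }
} finally {
    hits.close();
}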
Any clues/help is appreciated.
imamc,
I'd appreciate it if you posted a simple test that reproduces the problem, preferably to https://groups.google.com/forum/?fromgroups#!forum/neo4j
Offhand, what you said makes sense, and I don't have any other tips. But if we get some code/test to work on, we might be able to help.
Lasse
I have a huge SQLite file containing my DB. I need to know whether it is possible to connect to this DB as an embedded one with JPA, and how.
I'm developing an app that packs this database inside its own JAR, so that when I use it on another system I don't have to import a copy of my DB back and forth.
The technologies I'd like to use are Angular and Spring, since those are the ones I know best. If there are some technologies that better suit this purpose, I'd like some suggestions.
Thanks :)
I hope I understood your question correctly. I made a small project for you, so you can have a look into it: spring-jpa-sqlite-sample. It may guide you a bit, though I don't claim correctness or completeness.
The path to the SQLite file can easily be changed by inserting the correct url in the persistence.properties file:
driverClassName=org.sqlite.JDBC
# relative paths may be used
url=jdbc:sqlite:src/main/resources/chinook.db
hibernate.dialect=dev.mutiny.semo.config.SQLiteDataTypesConfig
hibernate.hbm2ddl.auto=none
hibernate.show_sql=true
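To illustrate how these properties can be consumed, here is a minimal sketch that builds a plain (non-pooling) DataSource from the same file. DriverManagerDataSource and the classpath location are my assumptions, not necessarily what the sample project does:
import java.io.IOException;
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class SqliteDataSourceFactory {
    // Sketch: load persistence.properties from the classpath and
    // wire the driver/url into a simple Spring DataSource.
    public static DataSource create() throws IOException {
        Properties props = new Properties();
        props.load(SqliteDataSourceFactory.class
                .getResourceAsStream("/persistence.properties"));
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName(props.getProperty("driverClassName")); // org.sqlite.JDBC
        ds.setUrl(props.getProperty("url"));                         // jdbc:sqlite:...
        return ds;
    }
}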
You can also use environment variables from your system, which Spring can read, to reference the correct path to the file. See: Read system environment var (SO).
Last but not least: beware of using huge SQLite files. Consider transferring the data first into a 'real' database, i.e. any client/server RDBMS you know (Oracle, MariaDB, MSSQL; it depends on your scenario/taste).
Have a closer look at the documentation: When to use SQLite (and when not to!)
We are working with H2O version 3.22.0.1. We created a process in Java 10 that communicates with the REST API using Jersey 2.27 with Gson 2.3.1. The process invokes ImportFiles, followed by ParseSetup and Parse. Everything works well up to that point. Then the process invokes 3/ModelBuilders/gbm/parameters. From examining the log, it appears that the H2O server responds as expected. However, Gson throws a JsonSyntaxException caused by the following:
java.lang.IllegalStateException: Expected BEGIN_OBJECT but was BEGIN_ARRAY at line 1 column 4115 path $.parameters
Upon further analysis, it appears that the H2O server is providing a GBMV3 object with an array of ModelParameterSchemaV3 objects, while the GBMV3 class, as defined in the library that our client uses, extends SharedTreeV3, which extends ModelBuilderSchema, which has a single instance of ModelParametersSchemaV3. There is an apparent discrepancy between the way the GBMV3 object provided by the H2O server is composed, and the way the class is defined in the H2O library. One has an array of ModelParameterSchemaV3 objects, while the other has a single instance of ModelParametersSchemaV3. Is that the case? If so, could you please help us understand what we may be doing wrong, and how to correct it?
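For what it's worth, the exception itself is easy to reproduce with plain Gson whenever a field declared as a single object receives a JSON array. A minimal sketch (the Params and Holder classes are hypothetical, purely to illustrate the shape mismatch):
import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

// Hypothetical classes, just to mirror the single-object declaration.
class Params { String name; }
class Holder { Params parameters; } // declared as one object, not an array

public class GsonMismatchDemo {
    public static void main(String[] args) {
        // The server-side shape: "parameters" is an array
        String json = "{\"parameters\":[{\"name\":\"ntrees\"}]}";
        try {
            new Gson().fromJson(json, Holder.class);
        } catch (JsonSyntaxException e) {
            // Expected BEGIN_OBJECT but was BEGIN_ARRAY at ... path $.parameters
            System.out.println(e.getMessage());
        }
    }
}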
See the files located at: https://1drv.ms/f/s!AsSlPHvlhJI1hIpB2M5X49J5L-h1qw
Run the H2O server. Import the CSV file in H2O Flow. SetupParse and Parse the data. Run the test procedure. Thank you for your kind assistance.
Thanks for the detailed description. To better understand your problem - would you be able to provide a simplified example of how you are calling H2O-3 using the Java bindings?
You might be hitting a bug so if you are able to give us a reproducer we could expedite a fix for this issue.
How do I set up an external database (MySQL, PostgreSQL; I'm not concerned with which one at this point) for use with the job metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB, and that works fine, but I'm not keeping track of job status, so the jobs are run from the start every time, even if interrupted halfway through.
There are plenty of examples of how to avoid doing this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than in-memory.
I attempted adding a properties file, but that had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store the metadata).
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
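As a minimal sketch of the first option, using the connection values from your properties file (DriverManagerDataSource is Spring's simple, non-pooling implementation; swap in a pooled DataSource for real use):
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class BatchMetadataConfig {

    // With @EnableBatchProcessing, Spring Batch picks up this DataSource
    // for its JobRepository, so job/step metadata lands in Postgres
    // instead of the in-memory map-based repository.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/postgres");
        ds.setUsername("postgres");
        ds.setPassword("mysecretpassword");
        return ds; // run schema-postgresql.sql against this DB beforehand
    }
}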
I'm currently working on a POC with Couchbase, using Spring Data to put & get documents on/off a bucket on a cluster.
As I'm working in a big company, I'm lucky they gave me a bucket, but I don't have admin rights on the cluster, so I only have access to the bucket.
Digging into the Spring Data documentation, I'm not able to find a way to retrieve documents without creating views on the server (I'm getting errors like "Unknown query param"). With the Couchbase Java SDK I'm able to, through N1QL queries, but the use of the Spring Data layer is mandatory.
The answers I found always point me in the direction of server-side functions,
e.g. https://stackoverflow.com/a/30928169/3744307
What I would like to find, is a way to add a repository method like
List<Receipt> findReceiptByAccount(String account)
without having to specifically declare the function server-side.
Is this possible, or do I have to send a request to the administrators to create functions for me every time I have to add a findByX method?
Thanks for your time,
What version of Couchbase is it?
I think that prior to 4.5, N1QL access (which you seem to have) is enough to build your index yourself!
With Spring Data Couchbase 2.x that would use a N1QL index in the background, and it would work with a single primary index (although having 1 index per repository entity class would be best for performance). Maybe you can ask your admin to create that index once?
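For illustration, a derived-query repository along those lines might look like the sketch below (assuming Spring Data Couchbase 2.x; the Receipt entity and its account field are hypothetical placeholders):
import java.util.List;
import org.springframework.data.couchbase.repository.CouchbaseRepository;

// Sketch: with N1QL support and a primary index on the bucket
// (CREATE PRIMARY INDEX ON `yourBucket`;), a derived query like this
// is translated to N1QL at runtime -- no server-side view needed.
public interface ReceiptRepository extends CouchbaseRepository<Receipt, String> {
    List<Receipt> findByAccount(String account);
}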
I am trying to create a Thinkaurelius Titan datastore using:
TitanGraph graph = TitanFactory.open("/tmp/graph");
The documentation can be found at https://github.com/thinkaurelius/titan/wiki/Using-BerkeleyDB
But each time I open the graph, a new datastore is created. I even tried using the configuration object, but it did not help. Has anyone worked on this before? I want to create a Titan datastore that is reusable, i.e. it should not create a new datastore each time I open it.
Any suggestions please?
It sounds like the changes aren't being committed to the database. Look more into how transactions work:
https://github.com/thinkaurelius/titan/wiki/Transaction-Handling
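A minimal sketch of the commit step (assuming a Titan version with the Blueprints TransactionalGraph API; mutations are only persisted once committed):
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;

public class TitanCommitDemo {
    public static void main(String[] args) {
        TitanGraph graph = TitanFactory.open("/tmp/graph");
        Vertex v = graph.addVertex(null);   // Blueprints-style vertex creation
        v.setProperty("name", "alice");
        graph.commit();                     // without this, the data is lost
        graph.shutdown();

        // Reopening the same directory now finds the committed data.
        TitanGraph reopened = TitanFactory.open("/tmp/graph");
        System.out.println(reopened.getVertices("name", "alice").iterator().hasNext());
        reopened.shutdown();
    }
}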