How do I run LOAD CSV using spring-boot-starter-data-neo4j?

I'm trying to run LOAD CSV to load a file into Neo4j using spring-boot-starter-data-neo4j 2.7.4. Since Spring Data Neo4j gives me access to Neo4j through the Neo4jRepository interface, I can only submit queries, not call LOAD CSV. Is it possible to call LOAD CSV using Spring Data? Is there a way to execute raw Cypher through Spring Data? If not, how can I submit a LOAD CSV command directly to Neo4j from Java?

public interface MyRepository extends Neo4jRepository<SomeObject, Long> {
    @Query("LOAD CSV FROM 'https://data.neo4j.com/bands/artists.csv' AS line\n"
            + "CREATE (:Artist {name: line[1], year: toInteger(line[2])})")
    void loadCsv();
}
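One option worth sketching: spring-boot-starter-data-neo4j 2.7.x ships Spring Data Neo4j 6, which exposes a Neo4jClient bean for running raw Cypher outside any repository. The following is a hedged sketch, not a verified setup; the CsvLoader class name is invented here, and note that with LOAD CSV it is the Neo4j server, not the application, that fetches the URL, so the file must be reachable from the server.

```java
import org.springframework.data.neo4j.core.Neo4jClient;
import org.springframework.stereotype.Service;

@Service
public class CsvLoader {

    private final Neo4jClient neo4jClient;

    public CsvLoader(Neo4jClient neo4jClient) {
        this.neo4jClient = neo4jClient;
    }

    // Submits the LOAD CSV statement as raw Cypher; no repository or
    // mapped domain type is involved.
    public void loadArtists() {
        neo4jClient.query(
                "LOAD CSV FROM 'https://data.neo4j.com/bands/artists.csv' AS line "
              + "CREATE (:Artist {name: line[1], year: toInteger(line[2])})")
            .run();
    }
}
```

Using Neo4jClient avoids tying the statement to a repository's domain type, which is awkward for a statement like LOAD CSV that returns nothing mappable.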

Related

How to create and use Ingest pipeline with Spring data elastic search

I need to upload a PDF file to Elasticsearch so that I can search the content inside it. Using the ingest pipeline curl APIs through Postman works fine, but I am unable to integrate and use them in my Spring Boot project to index and search the PDF file. Can anyone suggest how to create and use an ingest pipeline with Spring Data Elasticsearch?
For the index and document mapping we just annotate the entity class, but how do we use an ingest pipeline?
@Document(indexName = "blog", type = "article")
public class Article {
    @Id
    private String id;
    private String title;
    @Field(type = FieldType.Nested, includeInParent = true)
    private List<Author> authors;
    // standard getters and setters
}
From a Spring Boot perspective, I need clarity on how to configure an ingest pipeline and how to use it with the entity class to save the file data for search.
This is not supported in Spring Data Elasticsearch.
And please note that Spring Boot and Spring Data Elasticsearch are different things: Spring Boot can autoconfigure Spring Data Elasticsearch, but that's it.
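Since Spring Data Elasticsearch doesn't support pipelines, one hedged workaround is to drop down to the Elasticsearch client for the indexing call. This sketch assumes the 7.x RestHighLevelClient is on the classpath; the index name blog comes from the question, while the pipeline id attachment and the data field are assumptions matching a typical attachment-processor pipeline you would create beforehand via the ingest APIs.

```java
import java.io.IOException;
import java.util.Map;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

public class PipelineIndexer {

    private final RestHighLevelClient client;

    public PipelineIndexer(RestHighLevelClient client) {
        this.client = client;
    }

    // Indexes a base64-encoded PDF through the named ingest pipeline;
    // the pipeline extracts the text server-side so it becomes searchable.
    public void indexPdf(String id, String base64Pdf) throws IOException {
        IndexRequest request = new IndexRequest("blog")
                .id(id)
                .setPipeline("attachment")
                .source(Map.of("data", base64Pdf));
        client.index(request, RequestOptions.DEFAULT);
    }
}
```

Searching can still go through Spring Data Elasticsearch afterwards; only the write path needs the pipeline parameter.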

Is there a way to retrieve data from redis in my jsp pages?

I am creating a Spring Boot server, and I would like the initial page (index.jsp) to be a data table with information coming from the Redis database. Does anyone know how to do this with JavaScript or inside JSP pages?
First of all, JSP is an outdated technology, so I recommend going for Thymeleaf, which is a perfect choice with Spring Boot, instead of JSP.
Java approach:
Just create a mapping for the / base URL, fetch the Redis data in the usual way, and use it in the UI.
@GetMapping("/")
public String index() {
    // Fetch Redis data here and add it into model attributes
    return "index";
}
JavaScript/inside JSP approach :
You can create a mapping method with some url like above and make an AJAX call on page load from the JSP page and get the JSON format data and use it.
Don't forget to put #ResponseBody annotation on top of the method which will return the data(JSON) instead of the page.
Refer call ajax to spring controller
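A hedged sketch of the @ResponseBody endpoint such an AJAX call could hit, assuming a StringRedisTemplate bean is available and the rows live in a Redis list under a made-up key table:rows; adapt the key and value type to your actual data layout.

```java
import java.util.List;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class TableDataController {

    private final StringRedisTemplate redisTemplate;

    public TableDataController(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Returns the whole Redis list as JSON for the page-load AJAX call;
    // @ResponseBody makes Spring serialize the return value instead of
    // resolving a view name.
    @GetMapping("/api/rows")
    @ResponseBody
    public List<String> rows() {
        return redisTemplate.opsForList().range("table:rows", 0, -1);
    }
}
```

The JSP then only needs a fetch/XHR call to /api/rows in its onload handler to fill the table.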

Spring Data MongoDB eliminate POJO's

My system is a dynamic telemetry system. We have hundreds of different spiders all sending telemetry back to the Spring Boot server. Everything is dynamic, driven by JSON files in Mongo, including the UI. We don't build the UI; instead, individual teams configure their own UI for their needs, all by editing JSON docs.
We have the majority of the UI running, and I began the middleware piece. We are using Spring Boot for the first time, along with Spring Data Mongo and several MQ listeners for events. The problem is Spring Data. I started reading the docs and realized they do not address using it without POJOs. I have this wonderfully dynamic model that changes per user per minute if the telemetry spiders dictate; I couldn't shackle it to a POJO if I tried. Is there a way to use Spring Data with a Map?
It seems from my experiments that the big issue is that there is no way to tell the CRUD routines of the repository class which collection to query without a POJO.
Are my suspicions correct that this won't work, and am I better off ditching Spring Data and using the Mongo driver directly?
I don't think you can do it without a POJO when using Spring Data. The least you could do is this:
public interface NoPojoRepository extends MongoRepository<DummyPojo, String> {
}
and create a dummy POJO with just an id and a Map.
@Data
public class DummyPojo {
    @Id
    private String id;
    private Map<String, Object> value;
}
Since this value field is a map, you can store pretty much anything.
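For the collection-name problem specifically, MongoTemplate (rather than a repository) does let you pass the collection name explicitly and work with raw org.bson.Document values, no POJO required. A sketch under those assumptions; the collection name spider_telemetry and the spiderId field are invented for illustration.

```java
import java.util.List;
import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

public class DynamicTelemetryDao {

    private final MongoTemplate mongoTemplate;

    public DynamicTelemetryDao(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Saves an arbitrary document into an explicitly named collection;
    // the schema is whatever the spider sent.
    public void save(Document doc) {
        mongoTemplate.save(doc, "spider_telemetry");
    }

    // Reads raw documents back from the same collection, again without
    // mapping to a domain class.
    public List<Document> findBySpider(String spiderId) {
        Query query = new Query(Criteria.where("spiderId").is(spiderId));
        return mongoTemplate.find(query, Document.class, "spider_telemetry");
    }
}
```

This keeps Spring's resource management and converter setup while sidestepping the repository layer's need for a mapped type.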

Spring batch - hibernate integration

I'm working on a batch project that uses the Spring Batch core library.
The library uses JdbcTemplate to persist job metadata, and I'm trying to use Hibernate to read the data about a specific job.
package com.ben.batch.repository;
import org.springframework.batch.core.JobInstance;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
public interface JobInstanceRepository extends JpaRepository<JobInstance, Long> {
    @Query("select count(j) from JobInstance j where j.jobName = :jobName") // Can't resolve symbol 'JobInstance'
    Long countByJobName(String jobName);
}
In an ordinary Spring Boot project this works, but here it shows this error:
Can't resolve symbol 'JobInstance'
even though I imported the class correctly.
Any idea would be much appreciated.
For similar purposes the JobRepository exists:
http://docs.spring.io/spring-batch/apidocs/org/springframework/batch/core/repository/JobRepository.html
It allows you to fetch any information regarding your jobs.
The Spring Batch infrastructure is not yet available as a Spring Data repository; see this JIRA ticket: BATCH-2203.
JobInstance is not a Hibernate entity (source code for reference). You'll need to implement your own Hibernate persistence layer if you'd like to query the tables using Hibernate. The primary reason for this is that the framework allows you to define any table prefix you like, so your tables could end up as BATCH_JOB_EXECUTION, NIGHTLY_JOB_EXECUTION, ABCD_JOB_EXECUTION, etc. Because of that, the Hibernate model wouldn't know which table names to point to.
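Rather than Hibernate, the read side of the job metadata is available through Spring Batch's JobExplorer. A minimal sketch, assuming a configured JobExplorer bean; the JobMetadataReader class name is made up here.

```java
import java.util.List;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.NoSuchJobException;

public class JobMetadataReader {

    private final JobExplorer jobExplorer;

    public JobMetadataReader(JobExplorer jobExplorer) {
        this.jobExplorer = jobExplorer;
    }

    // Counts the instances of a job by name without touching the
    // BATCH_* tables (or their renamed equivalents) directly.
    public int countByJobName(String jobName) throws NoSuchJobException {
        return jobExplorer.getJobInstanceCount(jobName);
    }

    // Fetches the first page of instances for the same job name.
    public List<JobInstance> firstPage(String jobName) {
        return jobExplorer.getJobInstances(jobName, 0, 10);
    }
}
```

Because JobExplorer goes through the framework's own DAOs, it respects whatever table prefix the job repository was configured with.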

How to use batch-import of neo4j

I'm totally new to Neo4j and Maven.
I'm using the batch-import tool to convert my MySQL data to Neo4j.
After compiling and converting from .csv, a target folder containing graph.db is generated.
Does this mean that I have successfully converted my .csv into Neo4j?
What should I do now?
I hope there is someone who can help me.
You have a couple of options to inspect your converted data.
Java REST server
Download and install the Neo4j server, connect to it via http://localhost:7474, and launch a Cypher query to inspect your data, e.g. START n=node(*) RETURN n;.
Java embedded server
You could write a small Java application that connects to your data directory, e.g.:
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class InspectDb {
    public static void main(String[] args) throws Exception {
        final GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase("db");
        final ExecutionEngine engine = new ExecutionEngine(graphDb);
        System.out.println(engine.execute("START n=node(*) RETURN n").dumpToString());
        graphDb.shutdown();
    }
}
