I am building an application with the Spring Boot framework. I can easily use Spring Data to query data from the database, but is there any way I can send a merge query like the one below, instead of first creating thousands of objects and saving them as in this post?
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM
'file:///existed_file.csv' as line
WITH line
MERGE (n:Item {itemId: line.id})
You can use the @Query annotation on a repository method, or use the OGM Session.query to execute your Cypher statement directly.
However, SDN is not suited for data import in general (see When not to use SDN).
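As a sketch, the @Query route could look like the following. The repository and method names are hypothetical; it assumes Spring Data Neo4j is on the classpath and the CSV file is reachable from the Neo4j server:

```java
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.Neo4jRepository;

public interface ItemRepository extends Neo4jRepository<Item, Long> {

    // Runs the bulk import entirely on the server side -- no Item objects
    // are materialized in the application.
    @Query("USING PERIODIC COMMIT "
         + "LOAD CSV WITH HEADERS FROM 'file:///existed_file.csv' AS line "
         + "MERGE (n:Item {itemId: line.id})")
    void importItems();
}
```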
I have a database with many tables of data. I want to fetch the data from any one of those tables by entering a query from the front-end application. I'm not doing any manipulation of the data, just retrieving it from the database.
Also, mapping the data would require writing many entity or POJO classes, so I don't want to map the data to any object. How can I achieve this?
In this case, assuming the mapping of tables is not relevant, you don't need to use JPA/Hibernate at all.
You can use the old, battle-tested JdbcTemplate, which can execute a query of your choice (passed from the client), serialize the response into a JSONObject, and return it as the response from your controller.
The client side will be responsible for rendering the result.
You might also query the database metadata to obtain information about column names, types, etc., so that the client side also gets this information and can show the results in a more convenient / "advanced" way.
Beware of the security implications, though. Basically it means the client will be able to delete all the records from the database with a simple query, and you won't be able to prevent it :)
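A minimal sketch of the serialization step (the class and method names are illustrative; in the real controller the row list would come from `jdbcTemplate.queryForList(sql)`):

```java
import java.util.List;
import java.util.Map;

public class RowsToJson {
    // Turns the List<Map<String, Object>> that JdbcTemplate.queryForList
    // returns into a JSON array string. Deliberately minimal: handles only
    // nulls, numbers and strings, escaping quotes/backslashes in values.
    public static String toJson(List<Map<String, Object>> rows) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < rows.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append('{');
            int j = 0;
            for (Map.Entry<String, Object> e : rows.get(i).entrySet()) {
                if (j++ > 0) sb.append(',');
                sb.append('"').append(e.getKey()).append("\":");
                Object v = e.getValue();
                if (v == null) {
                    sb.append("null");
                } else if (v instanceof Number) {
                    sb.append(v);
                } else {
                    sb.append('"').append(v.toString()
                            .replace("\\", "\\\\").replace("\"", "\\\""))
                      .append('"');
                }
            }
            sb.append('}');
        }
        return sb.append(']').toString();
    }
}
```

In practice you would more likely hand the row list to a real JSON library (Jackson is already on the Spring Boot classpath); the sketch just shows that no entity classes are needed anywhere in the pipeline.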
I'm trying to filter data using Query by Example. It works for entities and primitives, but not for projections. Do you know if such functionality is available in Spring Data JPA?
At the moment it's not implemented in Spring Data JPA 2.1.0.RC2, but as a workaround it's possible to use an extension like specification-with-projection.
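For reference, the entity case that does work looks like this (the repository and probe names are illustrative, assuming a standard Spring Data JPA setup):

```java
// Build a probe entity with only the fields to match on.
Customer probe = new Customer();
probe.setLastName("Smith");

ExampleMatcher matcher = ExampleMatcher.matching().withIgnoreNullValues();
List<Customer> result = customerRepository.findAll(Example.of(probe, matcher));
// Asking for a projection interface instead of the entity type here is
// what is not supported in 2.1.0.RC2.
```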
I have to use SOQL queries within a Spring Data repository. Is there a way to do so using the @Query annotation? If not, is there an alternative?
As far as I know, Salesforce doesn't expose its table structures. Rather, it exposes objects and lets you write queries against them. Spring Data JPA sits on top of an entity framework like Hibernate; unless you have entity objects mapped to actual database tables, Spring Data JPA is not useful here.
The best way would be to use a query builder like jOOQ and construct SOQL queries easily with its builders.
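Since jOOQ targets SQL dialects rather than SOQL, the "builder" may in practice end up being a thin layer of your own. A hypothetical minimal sketch of the idea:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical minimal SOQL builder -- not jOOQ itself, just an
// illustration of the query-builder approach for SOQL-shaped queries.
public class SoqlBuilder {
    private final List<String> fields = new ArrayList<>();
    private String object;
    private String where;

    public SoqlBuilder select(String... fs) {
        fields.addAll(Arrays.asList(fs));
        return this;
    }

    public SoqlBuilder from(String obj) {
        this.object = obj;
        return this;
    }

    public SoqlBuilder where(String cond) {
        this.where = cond;
        return this;
    }

    public String build() {
        StringBuilder sb = new StringBuilder("SELECT ")
                .append(String.join(", ", fields))
                .append(" FROM ").append(object);
        if (where != null) sb.append(" WHERE ").append(where);
        return sb.toString();
    }
}
```

Usage: `new SoqlBuilder().select("Name", "Industry").from("Account").where("Industry = 'Tech'").build()` produces `SELECT Name, Industry FROM Account WHERE Industry = 'Tech'`, which you can then send through the Salesforce REST API client of your choice.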
I'm looking for a solution with Spring / Camel to consume multiple REST services at runtime, create tables to store the data from the REST APIs, and compare the data dynamically. I don't know the schema of the JSON APIs in advance, so I can't generate Java client classes or create JPA persistent entity classes at runtime.
You'll need to think through this differently. I'd forget about Java POJO classes that you don't have and can't create, since the class structure isn't known in advance. So anything with POJO-to-entity binding would be pretty useless.
One solution is to simply parse the XML or JSON body manually with an event-based parser (like SAX for XML) and build an SQL CREATE string as you go through the document. Your field and table names would correspond to the tags in the document. Without access to an XSD or other structure description, no metadata is available for field lengths or types, so you may have to make everything a really long VARCHAR. Perhaps an XML or other kind of database might suit your problem domain better. In any case, you could include such a thing right in your Camel route as a Processor that processes the body and creates the necessary tables if they don't already exist. You could even alter a table's lengths on the fly when you encounter a field value longer than what's currently defined.
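The create-as-you-parse idea can be sketched like this (class and column choices are hypothetical): once the event-based parser has collected the tag names of one record, build the DDL string, defaulting every column to a long VARCHAR since no type metadata is available:

```java
import java.util.Collection;

public class DdlFromTags {
    // Given a table name and the field names discovered while parsing one
    // record, emit a CREATE TABLE statement. Every column is a generous
    // VARCHAR because the document carries no type or length metadata.
    public static String createTableSql(String table, Collection<String> fieldNames) {
        StringBuilder sb = new StringBuilder("CREATE TABLE IF NOT EXISTS ")
                .append(table).append(" (");
        int i = 0;
        for (String f : fieldNames) {
            if (i++ > 0) sb.append(", ");
            sb.append(f).append(" VARCHAR(4000)");
        }
        return sb.append(")").toString();
    }
}
```

In a Camel Processor you would run the resulting string through a JdbcTemplate (or Camel's SQL component) before inserting the parsed values. Note that field names taken from an untrusted document should be validated before being concatenated into DDL.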
I am using Spring Batch 2.1.9.RELEASE.
I need to configure a job step that reads data from a MySQL DB, processes it, and writes back to MySQL. I want to do it in chunks.
I considered using JdbcCursorItemReader, but the SQL is a complex one: I need to fetch data from three other tables to build the actual SQL to use in the reader.
But if I use a custom ItemReader with JdbcTemplate/NamedParameterJdbcTemplate, how can I make sure the step processes the data in chunks? I am not using JPA/DAOs.
Many thanks,
In Spring Batch, data is normally processed in chunks; the easiest way is to declare a commit-interval in the step definition; see Configuring a Step.
Another way to define a custom chunk policy is to implement your own CompletionPolicy.
To answer your question: use a driving-query-based ItemReader to read from the main table and build the complex object (reading from the other tables), define a commit-interval, and use the standard read/process/write step pattern.
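For reference, the commit-interval is declared on the chunk element of the step, in the XML style used by Spring Batch 2.1 (bean names here are placeholders):

```xml
<job id="myJob" xmlns="http://www.springframework.org/schema/batch">
  <step id="processStep">
    <tasklet>
      <!-- reader/processor/writer are your own beans; commit-interval
           makes the step read, process and write in chunks of 100 -->
      <chunk reader="drivingQueryReader" processor="itemProcessor"
             writer="jdbcItemWriter" commit-interval="100"/>
    </tasklet>
  </step>
</job>
```

Your custom JdbcTemplate-based reader just needs to implement ItemReader and return one item per read() call (null at the end); the chunking itself is handled by the step, not by the reader.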
I hope I was clear, English is not my language.