How are .graphqls files generated in Spring Boot?

My ongoing project uses Spring Boot, GraphQL and Relay. In resources/graphql/type there are files ending with .graphqls. I am trying to write a query to fetch some data, but I am unable to access it in Postman. Do I need to create a .graphqls file for my query, or are these files autogenerated? If they are autogenerated, which command do I need to run?
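For context: with the schema-first GraphQL libraries commonly used with Spring Boot (e.g. graphql-java-tools), .graphqls files are hand-written schema definitions, not generated artifacts, so there is typically no command that produces them. A minimal hand-written schema file might look like this (the type and field names here are invented for illustration):

```graphql
# resources/graphql/type/book.graphqls (hypothetical example)
type Book {
    id: ID!
    title: String!
}

type Query {
    # each query you want to call (e.g. from Postman) must be declared here
    bookById(id: ID!): Book
}
```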

Related

How to read a file generated dynamically in Spring Boot

I have a Spring Boot application in which I generate a file dynamically and want the Flyway plugin to read that file in the same run.
I am able to generate the file, but Flyway cannot find it during the same run. If I restart the app, the file shows up and can be read on the next run.
I want to generate the file and be able to read it at the same time.
spring.devtools.restart.additional-paths=
I tried this, but it restarts the application and writes into the same file again, so records are duplicated; it also does not refresh the resource directory, so the file still cannot be read.
I tried many other ways, but none of them worked for me. Is there any solution?
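One approach worth sketching (not from this thread, and the datasource and path below are placeholders): write the generated migration to a filesystem location outside the packaged classpath and trigger the migration programmatically via Flyway's Java API, so no restart or resource refresh is involved. `filesystem:` locations are scanned at `migrate()` time, which is why a just-written file is visible:

```java
import org.flywaydb.core.Flyway;

public class DynamicMigration {

    public static void runAfterFileGenerated() {
        // filesystem: locations are read when migrate() is called,
        // so a file generated moments earlier is picked up without a restart
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:h2:mem:demo", "sa", "")      // placeholder datasource
                .locations("filesystem:/tmp/generated-migrations") // placeholder path
                .load();
        flyway.migrate();
    }
}
```

Call `runAfterFileGenerated()` from the same code path that writes the file, after the write completes.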

Get all config files with spring cloud config

I am relatively new to Spring Cloud Config, and I am able to pull a config file by application name. I need to pull all the config files from Git via Spring Cloud Config without having to know or pass the application name in the context.
Currently I have "spring.cloud.config.server.git.searchPaths=*" in my bootstrap file to search my entire Git project, and I use "/{application}-{profile}.properties" to pull the properties file matching the application name. I would like to pull all properties files without passing the application name; is this possible?
I need this in order to know how many properties files are checked in and how many are duplicates; that logic will be handled by a custom REST service I am planning to write.

Spring Batch and ElasticSearch

Here is my scenario:
I need to read a CSV file and store the output in Elasticsearch. I am using Spring Batch to read the CSV file. Can anyone give me an example of how to save to Elasticsearch using Spring Batch or Spring Batch Extensions?
It's an old question and you have probably found an answer by now, but here it goes...
To work with Elasticsearch you need Spring Data, and you simply write items from your writer as you normally do, but through a repository instance, e.g. repository.saveAll(list), where list is the List of items passed from the Spring Batch processor to the writer.
Here repository is an ElasticsearchRepository from Spring Data; you need to define a repository for each of your item types.
You wire your ElasticsearchRepository definitions to your Elasticsearch instance with @EnableElasticsearchRepositories, defining the persistence layer as done here. Set @EnableElasticsearchRepositories to the actual repository package of your project.
Hope it helps!
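A sketch of the writer described above (the item type, repository name, and index are invented; note that Spring Batch 5 changed the writer signature to take a Chunk instead of a List, and older Spring Data versions use save(Iterable) rather than saveAll):

```java
import java.util.List;

import org.springframework.batch.item.ItemWriter;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// Hypothetical item produced by the Spring Batch processor
class Product {
    String id;
    String name;
}

// Spring Data repository; discovered via @EnableElasticsearchRepositories
// on a @Configuration class pointing at this package
interface ProductRepository extends ElasticsearchRepository<Product, String> {
}

public class ProductItemWriter implements ItemWriter<Product> {

    private final ProductRepository repository;

    public ProductItemWriter(ProductRepository repository) {
        this.repository = repository;
    }

    @Override
    public void write(List<? extends Product> items) throws Exception {
        // items is the chunk handed over by the Spring Batch processor
        repository.saveAll(items);
    }
}
```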
Actually, I worked on a similar project, but instead of importing data from a CSV file I imported it from a relational database (MySQL), reading and filtering the data with Spring Batch and writing it into Elasticsearch. This is the link to the project on GitHub; read the readme.md file carefully and you will find all the required configuration:
the github project link

Need to use JHipster-generated entities from a standalone application

Using JHipster, I have successfully configured and run the AngularJS application from the front end, and I have also successfully created many custom entities. Now I want to have a load.java file in the project and use those entities to load data from CSV files into the entity tables. In other words, without using the front end (AngularJS), I should be able to use all the created entities and CRUD operations from load.java. Is this possible? If yes, any sample code reference would be helpful; I did not find any documentation on this on the website.
CSV loading in JHipster is done through Liquibase at database initialisation.
If you want to load CSV at any time, you'll have to code it yourself using the repositories generated by JHipster; this is a pure Spring Data/JPA question, not JHipster-specific.
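A minimal sketch of that do-it-yourself approach, with the Spring Data repository replaced by an in-memory stand-in so the CSV-to-entity logic is visible on its own (the entity and column names are invented; in a real JHipster app you would inject the generated repository and call its saveAll method instead):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CsvEntityLoader {

    // Hypothetical entity mirroring a JHipster-generated entity
    static class Country {
        final String code;
        final String name;

        Country(String code, String name) {
            this.code = code;
            this.name = name;
        }
    }

    // Stand-in for the Spring Data repository JHipster generates;
    // in a real app you would inject e.g. a CountryRepository here
    static class InMemoryRepository {
        final List<Country> store = new ArrayList<>();

        void saveAll(List<Country> entities) {
            store.addAll(entities);
        }
    }

    // Parses CSV lines (first line is the header) into entities
    static List<Country> parseCsv(List<String> lines) {
        List<Country> result = new ArrayList<>();
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            result.add(new Country(cols[0].trim(), cols[1].trim()));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> csv = Arrays.asList("code,name", "FR,France", "DE,Germany");
        InMemoryRepository repo = new InMemoryRepository();
        repo.saveAll(parseCsv(csv));
        System.out.println(repo.store.size()); // prints 2
    }
}
```

In a JHipster project the natural place to run such a loader once at startup is a CommandLineRunner bean, with the generated repository injected by Spring.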

Fetch file as byte array in spring integration

I'm new to Spring Integration. I need to fetch a file via SFTP and immediately start processing its content. There is the SFTP Inbound Channel Adapter, which partially satisfies me, but it saves (as the documentation says) the fetched file in a local directory. I have no possibility to save it on the local machine; I just want to start processing the content of that file, so it would be good for me to retrieve the remote file as a byte array or an InputStream. How can I achieve this with Spring Integration?
Also, I want to configure my system to fetch that file periodically. I know that I can annotate a method on a Spring bean with @Scheduled and start the processing from that method, but maybe Spring Integration has a more elegant solution for such a case?
Spring Integration 3.0.1 has a new RemoteFileTemplate that you can use to programmatically receive a file as a stream. See the Javadocs.
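A sketch of that approach (the session-factory setup is omitted and the remote path is a placeholder): RemoteFileTemplate.execute hands you the underlying Session, whose read method streams the remote file content directly into an OutputStream, so nothing ever touches the local filesystem:

```java
import java.io.ByteArrayOutputStream;

import org.springframework.integration.file.remote.RemoteFileTemplate;
import org.springframework.integration.file.remote.session.SessionFactory;

public class SftpBytesFetcher<F> {

    private final RemoteFileTemplate<F> template;

    public SftpBytesFetcher(SessionFactory<F> sessionFactory) {
        this.template = new RemoteFileTemplate<>(sessionFactory);
    }

    public byte[] fetch(String remotePath) {
        return template.execute(session -> {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            // Session.read copies the remote file's content into the stream;
            // no local file is created
            session.read(remotePath, out);
            return out.toByteArray();
        });
    }
}
```

For the periodic part, such a fetch method can be driven by a Spring Integration inbound flow with a poller, or simply by @Scheduled as you describe.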
