How to generate multiple files with one controller in Spring Boot

Hi, I have created a Spring Boot application that generates CSV files: it fetches data from the database and writes it into a CSV file. I have also added the ability to select which database columns should be included in the CSV. Now I need to download multiple files, each with a different set of columns from the database. I tried repeating the CSV-generation code in the same class, but it simply appends the content meant for the second file to the first file. Please let me know what I can do. For example, if there are 4 columns (id, amount, currency, name), then id and amount should go in one file and name and currency in another. Following is my code.
Controller:
public void exportCSV(@RequestParam(name = "cohort") String cohort, HttpServletResponse response) throws Exception {
    // set file name and content type
    String filename = "liabilities.csv";
    response.setContentType("text/csv");
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION,
            "attachment; filename=\"" + filename + "\"");
    // configure the CSV writer builder
    StatefulBeanToCsvBuilder<Report> builder = new StatefulBeanToCsvBuilder<Report>(response.getWriter())
            .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
            .withSeparator(CSVWriter.DEFAULT_SEPARATOR)
            .withOrderedResults(true);
    // ignore every field except `id` and `amount`
    Arrays.stream(Report.class.getDeclaredFields())
            .filter(field -> !("id".equals(field.getName()) || "amount".equals(field.getName())))
            .forEach(field -> builder.withIgnoreField(Report.class, field));
    // create a CSV writer
    StatefulBeanToCsv<Report> writer = builder.build();
    // write all reports for the cohort to the CSV file
    writer.write(reportsService.findByCohort(cohort));
}
The above code generates a CSV file with the id and amount entries from the database. What should I do to get currency and name into another CSV and download both at the same time?
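A single HTTP response can only carry one attachment, so one workaround (a sketch only, not taken from this thread) is to stream both CSVs inside one ZIP archive, reusing the builder logic from the question once per column set. The `/export` mapping and the `writeCsv` helper below are hypothetical names introduced just for illustration.
// needs java.util.zip.ZipOutputStream / ZipEntry, java.io.*, java.nio.charset.StandardCharsets,
// plus the opencsv imports already used in the question
@GetMapping("/export") // hypothetical mapping; wire it however the existing controller is mapped
public void exportCsvZip(@RequestParam(name = "cohort") String cohort,
                         HttpServletResponse response) throws Exception {
    response.setContentType("application/zip");
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"liabilities.zip\"");

    List<Report> reports = reportsService.findByCohort(cohort);

    // two entries in one archive: one CSV per column set
    try (ZipOutputStream zip = new ZipOutputStream(response.getOutputStream())) {
        zip.putNextEntry(new ZipEntry("id-amount.csv"));
        writeCsv(reports, Arrays.asList("id", "amount"), zip);
        zip.closeEntry();

        zip.putNextEntry(new ZipEntry("name-currency.csv"));
        writeCsv(reports, Arrays.asList("name", "currency"), zip);
        zip.closeEntry();
    }
}

// hypothetical helper: writes one CSV containing only the given columns,
// using the same opencsv builder pattern as in the question
private void writeCsv(List<Report> reports, List<String> columns, OutputStream out) throws Exception {
    Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
    StatefulBeanToCsvBuilder<Report> builder = new StatefulBeanToCsvBuilder<Report>(writer)
            .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
            .withSeparator(CSVWriter.DEFAULT_SEPARATOR)
            .withOrderedResults(true);
    Arrays.stream(Report.class.getDeclaredFields())
            .filter(field -> !columns.contains(field.getName()))
            .forEach(field -> builder.withIgnoreField(Report.class, field));
    builder.build().write(reports);
    writer.flush(); // flush but do not close: closing would close the underlying ZIP stream
}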

Related

Creation Date PDF Google Drive

I want to change the creation date on a Google Drive PDF file. So, how can I achieve this?
public static File SetLastModified(string fileID, DateTime lastModified)
{
    File file = DriveService.Files.Get(fileID).Fetch();
    file.ModifiedDate = lastModified.ToString("yyyy'-'MM'-'dd'T'HH':'mm':'ss.fff'Z'");
    try
    {
        FilesResource.UpdateRequest request = DriveService.Files.Update(file, fileID);
        request.SetModifiedDate = true;
        file = request.Fetch();
    }
    catch (Exception e)
    {
        throw;
    }
    return file;
}
It is not possible to change the creation date of a file using the Files:update() method.
If you check the request body for Files:update(), you can only modify the parameters listed in the table in the documentation, and the createdTime property is not included in that list.
What your sample code modifies is the modifiedTime property, which is included in the list of editable properties.
If you want a file with a specific createdTime, you need to create a new file using Files:create(); createdTime can be set as part of that request body.
Additional Reference:
Drive API Quickstart - Setup Drive API for different platforms
Drive API - How to create Files
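For completeness, a minimal sketch of the create path, using the Drive v3 Java client rather than the .NET client from the question; the file name and timestamp are placeholders.
// driveService is an authorized com.google.api.services.drive.Drive instance;
// File is com.google.api.services.drive.model.File, DateTime is com.google.api.client.util.DateTime
File metadata = new File();
metadata.setName("report.pdf"); // placeholder name
// createdTime can only be supplied when the file is created
metadata.setCreatedTime(DateTime.parseRfc3339("2017-01-01T00:00:00.000Z"));

FileContent content = new FileContent("application/pdf", new java.io.File("report.pdf"));
File created = driveService.files()
        .create(metadata, content)
        .setFields("id, createdTime")
        .execute();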

How to read flat file header and body separately in Spring Batch

I'm doing a simple batch job with Spring Batch and Spring Boot.
I need to read a flat file, separate the header data (first line) from the body data (the rest of the lines) for individual business-logic processing, and then write everything into a single file.
As you can see, the header has 5 fields that have to be mapped to one class, and the body has 12 that have to be mapped to a different one.
I first thought of using FlatFileItemReader and skipping the header, then using the skippedLinesCallback to handle that line, but I couldn't figure out how to do it.
I'm new to Spring Batch and Java config. If someone can help me write a solution to my problem I would really appreciate it!
Here is the input file:
01.01.2017|SUBDCOBR|12:21:23|01/12/2016|31/12/2016
01.01.2017|12345678231234|0002342434|BORGIA RUBEN|27-32548987-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,01
01.01.2017|12345673201234|2342434|ALVAREZ ESTHER|27-32533987-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,02
01.01.2017|12345673201234|0002342434|LOPEZ LUCRECIA|27-32553387-9|FA|A|2062-00010443/444/445|142,12|30/08/2017|142,12
01.01.2017|12345672301234|0002342434|SILVA JESUS|27-32558657-9|NC|A|2062-00010443|142,12|30/08/2017|142,12
Cheers!
EDIT 1:
This would be my first attempt. My "body" POJO is called DetalleFacturacion and my "header" POJO is CabeceraFacturacion. I thought of building the reader around the DetalleFacturacion POJO, so I can skip the header and treat it later; however, I'm not sure how to get the header's data into CabeceraFacturacion.
public FlatFileItemReader<DetalleFacturacion> readerDetalleFacturacion() {
    FlatFileItemReader<DetalleFacturacion> reader = new FlatFileItemReader<>();
    reader.setLinesToSkip(1);
    reader.setResource(new ClassPathResource("/inputFiles/GLEO-MN170100-PROCESO01-SUBDFACT-000001.txt"));

    DefaultLineMapper<DetalleFacturacion> detalleLineMapper = new DefaultLineMapper<>();
    DelimitedLineTokenizer tokenizerDet = new DelimitedLineTokenizer("|");
    tokenizerDet.setNames(new String[] {"fechaEmision", "tipoDocumento", "letra", "nroComprobante",
            "nroCliente", "razonSocial", "cuit", "montoNetoGP", "montoNetoG3",
            "montoExento", "impuestos", "montoTotal"});

    LineCallbackHandler skippedLineCallback = new LineCallbackHandler() {
        @Override
        public void handleLine(String line) {
            // "|" is a regex metacharacter, so it has to be escaped for split()
            String[] headerSeparado = line.split("\\|");
            String printDate = headerSeparado[0];
            String reportIdentifier = headerSeparado[1];
            String tituloReporte = headerSeparado[2];
            String fechaDesde = headerSeparado[3];
            String fechaHasta = headerSeparado[4];
            CabeceraFacturacion cabeceraFacturacion = new CabeceraFacturacion();
            cabeceraFacturacion.setPrintDate(printDate);
            cabeceraFacturacion.setReportIdentifier(reportIdentifier);
            cabeceraFacturacion.setTituloReporte(tituloReporte);
            cabeceraFacturacion.setFechaDesde(fechaDesde);
            cabeceraFacturacion.setFechaHasta(fechaHasta);
        }
    };
    reader.setSkippedLinesCallback(skippedLineCallback);

    detalleLineMapper.setLineTokenizer(tokenizerDet);
    detalleLineMapper.setFieldSetMapper(new DetalleFieldSetMapper());
    detalleLineMapper.afterPropertiesSet();
    reader.setLineMapper(detalleLineMapper);

    // Test to check if data is being saved correctly in CabeceraFacturacion
    CabeceraFacturacion cabeceraFacturacion = new CabeceraFacturacion();
    System.out.println("Print Date: " + cabeceraFacturacion.getPrintDate());
    System.out.println("Report Identifier: " + cabeceraFacturacion.getReportIdentifier());

    return reader;
}
You are correct. You need to use skippedLinesCallback to handle the skipped lines.
You need to implement the LineCallbackHandler interface and add your processing in the handleLine method.
LineCallbackHandler is passed the raw line content of each line to be skipped; if linesToSkip is set to 2, it is called twice.
This is how you can define the reader:
Java config - Spring Batch 4
@Bean
public FlatFileItemReader<POJO> myReader() {
    return new FlatFileItemReaderBuilder<POJO>()
            .name("myReader")
            .resource(new FileSystemResource("resources/players.csv"))
            .delimited()
            .delimiter(",")
            .names("pro1", "pro2", "pro3")
            .targetType(POJO.class)
            .linesToSkip(1) // the callback only fires for skipped lines
            .skippedLinesCallback(skippedLinesCallback) // your LineCallbackHandler implementation
            .build();
}
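The snippet above stops at the reader. As a rough sketch (the class name HeaderCallbackHandler is made up here; CabeceraFacturacion comes from the question), the callback itself could parse the skipped header line and keep the result so a later processor or writer can read it:
public class HeaderCallbackHandler implements LineCallbackHandler {

    private CabeceraFacturacion cabecera;

    @Override
    public void handleLine(String line) {
        // the skipped first line arrives here as raw text; "|" must be escaped for split()
        String[] fields = line.split("\\|");
        cabecera = new CabeceraFacturacion();
        cabecera.setPrintDate(fields[0]);
        cabecera.setReportIdentifier(fields[1]);
        cabecera.setTituloReporte(fields[2]);
        cabecera.setFechaDesde(fields[3]);
        cabecera.setFechaHasta(fields[4]);
    }

    // expose the parsed header so other beans (processor, writer, listener) can use it
    public CabeceraFacturacion getCabecera() {
        return cabecera;
    }
}
Register a single instance of it as the skippedLinesCallback above and inject that same instance wherever the header values are needed.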

Parsing multi-format & multi line data file in spring batch job

I am writing a Spring Batch job to process the data file below and write it into a DB.
The sample data file is of this format: it has multiple headers, and each header has a bunch of rows associated with it.
There can be millions of records for each header, and the flat file being processed can contain any number of headers. My requirement is to pick only a few of the headers I am concerned with.
For all the picked headers I need to pick up all of their data rows. Each header and its data format is also different. I can receive any of these records in my processor and need to write them into my DB.
HDR01
A|41|57|Data1|S|62|Data2|9|N|2017-02-01 18:01:05|2017-02-01 00:00:00
A|41|57|Data1|S|62|Data2|9|N|2017-02-01 18:01:05|2017-02-01 00:00:00
HDR02
A|41|57|Data1|S|62|Data2|9|N|
A|41|57|Data1|S|62|Data2|9|N|
I tried exploring PatternMatchingCompositeLineMapper, where I can map each header pattern I have to a tokenizer and a corresponding FieldSetMapper, but I need to read the body and not just the header here.
There is no footer either, so I cannot create an end-of-record policy of my own.
I also tried AggregateItemReader, but I don't want to club all the records of a header together before I process them; each row corresponding to a header should be processed in parallel.
@Bean
public LineMapper myLineMapper() {
    PatternMatchingCompositeLineMapper<Domain> mapper = new PatternMatchingCompositeLineMapper<>();

    final Map<String, LineTokenizer> tokenizers = new HashMap<String, LineTokenizer>();
    tokenizers.put("* HDR01*", new DelimitedLineTokenizer());
    tokenizers.put("*HDR02*", new DelimitedLineTokenizer());
    tokenizers.put("*", new DelimitedLineTokenizer("|"));
    mapper.setTokenizers(tokenizers);

    Map<String, FieldSetMapper<VMSFeedStyleInfo>> mappers = new HashMap<String, FieldSetMapper<VMSFeedStyleInfo>>();
    try {
        mappers.put("* HDR01*", customMapper());
        mappers.put("*HDR02*", customMapper());
        mappers.put("*", customMapper());
    } catch (Exception e) {
        e.printStackTrace();
    }
    mapper.setFieldSetMappers(mappers);

    return mapper;
}
Can somebody give me some input on how I should achieve this?

Export to Excel using existing Excel template

I have implemented the Export to Excel functionality as below:
var table = getReport(param1, param2, param3);
Response.ClearContent();
Response.AddHeader("content-disposition", "attachment;filename=Report.xls");
Response.ContentType = "application/vnd.ms-excel";
System.Web.UI.WebControls.GridView grd = new System.Web.UI.WebControls.GridView();
grd.DataSource = table; // give datasource here
grd.DataBind();
StringWriter swr = new StringWriter();
HtmlTextWriter tw = new HtmlTextWriter(swr);
grd.RenderControl(tw);
Response.Write(swr.ToString());
Response.End();
return View();
where getReport returns the data in table format.
This creates a new Excel file, but I want to use an existing Excel template to produce the Excel output.
How do I add such a template to the project, and how do I bind the data to it?
The approach you are using now is writing an HTML string to the Response and setting the ContentType to Excel in the Response headers. This does not create a true Excel file; it just forces the browser to open the file in Excel, which happens to be able to handle HTML files. There is no way to add a template with this approach.
If you want to use an existing Excel file as a template, you will need to use an API that can create and modify real Excel files, such as OfficeWriter. Here is an example of how to bind your DataTable directly to a template file using OfficeWriter.
using SoftArtisans.OfficeWriter.ExcelWriter;
ExcelTemplate xlt = new ExcelTemplate();
//Open the template file
xlt.Open(pathToTemplateFile);
//Create a DataBindingProperties object
DataBindingProperties props = xlt.CreateDataBindingProperties();
// Call the BindData method, passing the DataTable and a name for the datasource
// which will be referenced in the "data markers" in the template
xlt.BindData(table, "DataSource1", props);
//Call the Process method, which binds the data and generates the output file
xlt.Process();
//Stream the file to the Response
xlt.Save(Response, "Report.xls", false);
Disclaimer: I work for SoftArtisans, the makers of OfficeWriter

Spring 3 - export data in csv format

I have some data in the DB and would like to be able to export it to a CSV file and provide a link so the user can download it.
Is there any mechanism provided by Spring 3 for this?
Do you know how could I do that?
I guess it should be easy to build a CSV or any text file view. Here are suggested steps:
Create a View class (you can name it as CSVView) extending org.springframework.web.servlet.view.AbstractView
Override renderMergedOutputModel as follows (pseudo code):
BufferedWriter writer = new BufferedWriter(response.getWriter())
response.setHeader("Content-Disposition","attachment; filename=\"file.csv\"");
myDbData = (Whatever) modelMap.get("modelKey");
some kind of loop {writer.write(myDbData csv row); writer.newLine(); }
finally writer.flush(); writer.close();
After that, just return a ModelAndView from the controller with the model (the Whatever object under modelKey) and CSVView as the view.
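Fleshed out, such a view might look roughly like this (a sketch only, assuming the controller puts a List<String[]> of rows into the model under "modelKey"; adapt the row type to whatever your model actually holds):
import java.io.BufferedWriter;
import java.util.List;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.view.AbstractView;

public class CSVView extends AbstractView {

    public CSVView() {
        setContentType("text/csv");
    }

    @Override
    protected void renderMergedOutputModel(Map<String, Object> model,
                                           HttpServletRequest request,
                                           HttpServletResponse response) throws Exception {
        response.setContentType(getContentType());
        response.setHeader("Content-Disposition", "attachment; filename=\"file.csv\"");

        @SuppressWarnings("unchecked")
        List<String[]> rows = (List<String[]>) model.get("modelKey");

        BufferedWriter writer = new BufferedWriter(response.getWriter());
        for (String[] row : rows) {
            // naive CSV: real data may need quoting/escaping of separators
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < row.length; i++) {
                if (i > 0) {
                    sb.append(',');
                }
                sb.append(row[i]);
            }
            writer.write(sb.toString());
            writer.newLine();
        }
        writer.flush();
    }
}
The controller can then return new ModelAndView(new CSVView(), "modelKey", rows), or register the view by name with a view resolver.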
