BIRT report with images and iText never ends

I'm working with BIRT reports and, when I try to generate a PDF file, it never finishes. The problem is in the IRunAndRenderTask.run line. I get no exception when I create the report engine.
Here is the code that creates the BIRT engine and the report designs.
private static IReportEngine birtEngine = null;
private static IReportRunnable examenAuditifReportDesign = null;
private static IReportRunnable bilanCollectifReportDesign = null;
private static Properties configProps = new Properties();

static {
    loadEngineProps(); // Read the BIRT configuration properties
    EngineConfig config = new EngineConfig();
    if (configProps != null) {
        config.setLogConfig(configProps.getProperty("logDirectory"), Level.INFO);
        config.setBIRTHome(configProps.getProperty("birtHome"));
        config.setResourcePath(configProps.getProperty("ressourcePath"));
        config.setEngineHome(configProps.getProperty("birtHome"));
        config.setProperty("birtReportsHome", configProps.getProperty("birtReportsHome"));
    }
    try {
        RegistryProviderFactory.releaseDefault();
        Platform.startup(config);
    } catch (BirtException e) {
        e.printStackTrace();
    }
    IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
    birtEngine = factory.createReportEngine(config);
    try {
        examenAuditifReportDesign = birtEngine.openReportDesign(
                new FileInputStream(birtEngine.getConfig().getProperty("birtReportsHome") + "/examenAuditif.rptdesign"));
        bilanCollectifReportDesign = birtEngine.openReportDesign(
                new FileInputStream(birtEngine.getConfig().getProperty("birtReportsHome") + "/bilanCollectif.rptdesign"));
    } catch (EngineException e) {
        e.printStackTrace();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}
And here's the code to execute the reports.
IRunAndRenderTask task = birtEngine.createRunAndRenderTask(examenAuditifReportDesign);
task.setParameterValue("RPT_ID_Travailleur", t.getId());
task.setParameterValue("RPT_LATEST_HA", params.getId_latest_ha());
task.setParameterValue("RPT_LATEST_EXAMN", params.getId_latest_examen());
task.setParameterValue("RPT_PREVIOUS_HA", params.getId_previous_ha());
task.setParameterValue("RPT_PREVIOUS_EXAMN", params.getId_previous_examen());
PDFRenderOption options = new PDFRenderOption();
options.setOutputStream(outputStream);
options.setOutputFormat("pdf");
task.setRenderOption(options);
task.run();
task.close();
It is the task.run() line that takes forever; I let it run for about an hour to an hour and a half and it never finished.
If anyone can help, it will be really appreciated.
Thank you.

If the report works in the BIRT Eclipse designer, then it should work with the BIRT runtime API; the issue is in your code.
You need to close the outputStream passed to the PDFRenderOption object yourself once the task has finished.
Also add this line to your code:
options.setSupportedImageFormats("PNG;JPG;BMP");
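Putting both suggestions together, a minimal sketch of the render step (checked exceptions are left to the caller; outputStream is the stream from the question):

IRunAndRenderTask task = birtEngine.createRunAndRenderTask(examenAuditifReportDesign);
try {
    PDFRenderOption options = new PDFRenderOption();
    options.setOutputStream(outputStream);
    options.setOutputFormat("pdf");
    options.setSupportedImageFormats("PNG;JPG;BMP"); // declare which image formats the PDF output supports
    task.setRenderOption(options);
    task.run();
} finally {
    task.close();
    outputStream.close(); // close the stream yourself once the task has terminated
}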

I found the problem, and it wasn't in either of the two APIs.
We were generating a PDF file per worker, zipping all the workers' PDF files together, and saving that zip in a BLOB in the database. That was the problem.
If the zip is saved to the file system, it works fine.
Thank you all.
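For anyone hitting the same limit, a minimal sketch of writing the per-worker PDFs into a zip on the file system instead of a database BLOB (the workerPdfs map is hypothetical):

// imports: java.io.*, java.util.Map, java.util.zip.*
// Write each worker's PDF into a zip file on the file system instead of a database BLOB.
// The workerPdfs map (worker id -> PDF bytes) is hypothetical.
void zipReportsToFileSystem(Map<String, byte[]> workerPdfs, File target) throws IOException {
    try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(target))) {
        for (Map.Entry<String, byte[]> entry : workerPdfs.entrySet()) {
            zip.putNextEntry(new ZipEntry(entry.getKey() + ".pdf"));
            zip.write(entry.getValue());
            zip.closeEntry();
        }
    }
}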

Related

Is there a way to batch upload a collection of InputStreams to Amazon S3 using the Java SDK?

I am aware of the TransferManager and its .uploadFileList() and .uploadFileDirectory() methods; however, they accept java.io.File objects as arguments. I have a collection of byte-array input streams containing JPEG image data, and I don't want to create in-memory files to store this data before uploading it, either.
So what I need is essentially what the S3 client's PutObjectRequest does, but for a collection of InputStream objects. Also, if one upload fails, I want to abort the whole thing and not upload anything, much like a database transaction reversing its changes if something goes wrong along the way.
Is this possible with the Java SDK?
Before I share an answer, please consider upgrading.
FYI: TransferManager is deprecated and is now supported through TransferManagerBuilder in the AWS SDK for Java, so please consider upgrading if TransferManagerBuilder suits your needs.
Now, since you asked about TransferManager, you can either 1) copy the code below and replace the functionality/arguments with your custom in-memory handling of the input stream, handling it in your own function, or 2) try the second sample further below as-is.
The GitHub source, modified to work with an InputStream, and the related issue are listed here:
private def uploadFile(is: InputStream, s3ObjectName: String, metadata: ObjectMetadata) = {
  try {
    val putObjectRequest = new PutObjectRequest(bucketName, s3ObjectName, is, metadata)
    // TransferManager supports asynchronous uploads and downloads
    val upload = transferManager.upload(putObjectRequest)
    upload.addProgressListener(ExceptionReporter.wrap(UploadProgressListener(putObjectRequest)))
  } catch {
    case e: Exception => throw new RuntimeException(e)
  }
}
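For the same idea in Java, here is a sketch using TransferManagerBuilder from the AWS SDK for Java v1; the bucket, key, stream and content length are parameters here, so adapt them to your own code:

// imports: java.io.InputStream, com.amazonaws.services.s3.*, com.amazonaws.services.s3.model.*,
//          com.amazonaws.services.s3.transfer.*
void uploadStream(String bucketName, String key, InputStream inputStream, long contentLength)
        throws InterruptedException {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    TransferManager transferManager = TransferManagerBuilder.standard().withS3Client(s3).build();

    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(contentLength); // avoids the SDK buffering the whole stream in memory

    Upload upload = transferManager.upload(new PutObjectRequest(bucketName, key, inputStream, metadata));
    upload.waitForCompletion(); // block until this single upload finishes

    transferManager.shutdownNow(false); // keep the underlying S3 client alive if it is shared
}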
Bonus: a nice custom answer here uses SequenceInputStream:
public void combineFiles() {
    List<String> files = getFiles();
    long totalFileSize = files.stream()
            .map(this::getContentLength)
            .reduce(0L, (f, s) -> f + s);
    try {
        try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
            ObjectMetadata resultFileMetadata = new ObjectMetadata();
            resultFileMetadata.setContentLength(totalFileSize);
            s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
        }
    } catch (IOException e) {
        LOG.error("An error occurred while combining files. {}", e);
    }
}

private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
    return new Enumeration<InputStream>() {
        private Iterator<String> fileNamesIterator = files.iterator();

        @Override
        public boolean hasMoreElements() {
            return fileNamesIterator.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
            } catch (FileNotFoundException e) {
                System.err.println(e.getMessage());
                throw new RuntimeException(e);
            }
        }
    };
}
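Neither sample covers the "all or nothing" requirement from the question. S3 has no real transactions, but a best-effort sketch (assuming the AWS SDK for Java v1; s3Client, bucketName and the streamsByKey map are your own) could delete whatever was already uploaded when one upload fails:

// imports: java.io.InputStream, java.util.*, com.amazonaws.AmazonClientException,
//          com.amazonaws.services.s3.AmazonS3, com.amazonaws.services.s3.model.ObjectMetadata
void uploadAllOrNothing(AmazonS3 s3Client, String bucketName, Map<String, InputStream> streamsByKey) {
    List<String> uploadedKeys = new ArrayList<>();
    try {
        for (Map.Entry<String, InputStream> entry : streamsByKey.entrySet()) {
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentType("image/jpeg");
            s3Client.putObject(bucketName, entry.getKey(), entry.getValue(), metadata);
            uploadedKeys.add(entry.getKey());
        }
    } catch (AmazonClientException e) {
        for (String key : uploadedKeys) {
            s3Client.deleteObject(bucketName, key); // best-effort rollback; this can itself fail
        }
        throw e;
    }
}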

How to read and write files in a reactive way using InputStream and OutputStream

I am trying to read an Excel file, manipulate it or add new data to it, and write it back out. I am also trying to make this a completely reactive process using Flux and Mono. The idea is to return the resulting file or byte array via a web service.
My question is: how do I get an InputStream and an OutputStream in a non-blocking way?
I am using the Apache POI library to read and generate the Excel file.
I currently have a solution based around a mix of Mono.fromCallable() and blocking code to get the InputStream.
For example, the web service part is as follows:
@GetMapping(value = API_BASE_PATH + "/download", produces = "application/vnd.ms-excel")
public Mono<ByteArrayResource> download() {
    Flux<TimeKeepingEntry> createExcel = excelExport.createDocument(false);
    return createExcel.then(Mono.fromCallable(() -> {
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        excelExport.getWb().write(outputStream);
        return new ByteArrayResource(outputStream.toByteArray());
    }).subscribeOn(Schedulers.elastic()));
}
And the processing of the file:
public Flux<TimeKeepingEntry> createDocument(boolean all) {
    Flux<TimeKeepingEntry> entries = null;
    try {
        InputStream inputStream = new ClassPathResource("Timesheet Template.xlsx").getInputStream();
        wb = WorkbookFactory.create(inputStream);
        Sheet sheet = wb.getSheetAt(0);
        log.info("Created document");
        if (all) {
            // all entries
        } else {
            entries = service.findByMonth(currentMonthName)
                    .log("Excel Export - retrievedMonths")
                    .sort(Comparator.comparing(TimeKeepingEntry::getDateOfMonth))
                    .doOnNext(timeKeepingEntry -> this.populateEntry(sheet, timeKeepingEntry));
        }
    } catch (IOException e) {
        log.error("Error Importing File", e);
    }
    return entries;
}
This works well enough, but it is not very much in line with Flux and Mono. Some guidance here would be appreciated; I would prefer the whole sequence to be non-blocking.
Unfortunately the WorkbookFactory.create() operation is blocking, so you have to perform that operation using imperative code. However, fetching each timeKeepingEntry can be done reactively. Your code would look something like this:
public Flux<TimeKeepingEntry> createDocument() {
    return Flux.generate(
            this::getWorkbookSheet,
            (sheet, sink) -> {
                sink.next(getNextTimeKeepingEntryFrom(sheet));
                return sheet; // the generator must hand the state back
            },
            this::closeWorkbook);
}
This will keep the workbook in memory, but will fetch each entry on demand when the elements of the Flux are requested.
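A fuller sketch of the same idea, assuming Apache POI and Reactor are on the classpath; rowToEntry(...) is a hypothetical helper that maps a spreadsheet Row to a TimeKeepingEntry:

// imports: java.io.IOException, java.util.*, org.apache.poi.ss.usermodel.*,
//          org.springframework.core.io.ClassPathResource, reactor.core.publisher.Flux
public Flux<TimeKeepingEntry> createDocument() {
    return Flux.<TimeKeepingEntry, Map.Entry<Workbook, Iterator<Row>>>generate(
            () -> {
                // the blocking open happens once, lazily, when the Flux is subscribed to
                Workbook wb = WorkbookFactory.create(
                        new ClassPathResource("Timesheet Template.xlsx").getInputStream());
                return new AbstractMap.SimpleEntry<>(wb, wb.getSheetAt(0).rowIterator());
            },
            (state, sink) -> {
                Iterator<Row> rows = state.getValue();
                if (rows.hasNext()) {
                    sink.next(rowToEntry(rows.next())); // hypothetical mapping helper
                } else {
                    sink.complete();
                }
                return state; // the generator must hand the state back
            },
            state -> {
                try {
                    state.getKey().close(); // release the workbook when the Flux terminates
                } catch (IOException ignored) {
                }
            });
}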

How to get rid of NullPointerException in Flume Interceptor?

I have an interceptor written for Flume; the code is below:
public Event intercept(Event event) {
    byte[] xmlstr = event.getBody();
    InputStream instr = new ByteArrayInputStream(xmlstr);
    //TransformerFactory factory = TransformerFactory.newInstance(TRANSFORMER_FACTORY_CLASS, TRANSFORMER_FACTORY_CLASS.getClass().getClassLoader());
    TransformerFactory factory = TransformerFactory.newInstance();
    Source xslt = new StreamSource(new File("removeNs.xslt"));
    Transformer transformer = null;
    try {
        transformer = factory.newTransformer(xslt);
    } catch (TransformerConfigurationException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }
    Source text = new StreamSource(instr);
    OutputStream ostr = new ByteArrayOutputStream();
    try {
        transformer.transform(text, new StreamResult(ostr));
    } catch (TransformerException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    event.setBody(ostr.toString().getBytes());
    return event;
}
I'm removing the namespace from my source XML with the removeNs.xslt file, so that I can store the data in HDFS and later put it into Hive. When my interceptor runs, it throws the error below:
ERROR org.apache.flume.source.jms.JMSSource: Unexpected error processing events
java.lang.NullPointerException
    at test.intercepter.App.intercept(App.java:59)
    at test.intercepter.App.intercept(App.java:82)
    at org.apache.flume.interceptor.InterceptorChain.intercept(InterceptorChain.java:62)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:146)
    at org.apache.flume.source.jms.JMSSource.doProcess(JMSSource.java:258)
    at org.apache.flume.source.AbstractPollableSource.process(AbstractPollableSource.java:54)
    at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:139)
    at java.lang.Thread.run(Thread.java:745)
Can you suggest what the problem is and where it occurs?
I found the solution. The problem was nothing other than new File("removeNs.xslt"): the interceptor could not find the file, since I wasn't sure where to keep it. I later placed it in the Flume agent's directory, but as soon as I restarted the Flume agent it deleted all the files I had kept there. So I changed the code and embedded the stylesheet content directly in my Java code.
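A sketch of that change inside intercept(), keeping the stylesheet text in the Java code (or packaging it in the interceptor jar) instead of relying on new File("removeNs.xslt"); REMOVE_NS_XSLT is a hypothetical String constant, and the exception handling from the original interceptor is omitted:

// imports: java.io.StringReader
Source xslt = new StreamSource(new StringReader(REMOVE_NS_XSLT));
// or, if removeNs.xslt is packaged inside the interceptor jar:
// Source xslt = new StreamSource(getClass().getResourceAsStream("/removeNs.xslt"));
Transformer transformer = factory.newTransformer(xslt);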

Loading a PDF in-browser from a file in the server file system?

How can I get a pdf located in a file in a server's directory structure to load in a browser for users of a Spring MVC application?
I have googled this and found postings about how to generate PDFs, but their answers do not work in this situation. For example, this other posting is not relevant because res.setContentType("application/pdf"); in my code below does not solve the problem. Also, this other posting describes how to do it from a database but does not show full working controller code. Other postings had similar problems that caused them not to work in this case.
I need to simply serve up a file (not from a database) and have it be viewable by the user in their browser. The best I have come up with is the code below, which asks the user to download the PDF or to view it in a separate application outside the browser. What changes can I make to the code below so that the user automatically sees the PDF content inside their browser when they click the link, instead of being prompted to download it?
@RequestMapping(value = "/test-pdf")
public void generatePdf(HttpServletRequest req, HttpServletResponse res) {
    res.setContentType("application/pdf");
    res.setHeader("Content-Disposition", "attachment;filename=report.pdf");
    ServletOutputStream outStream = null;
    try {
        BufferedInputStream bis = new BufferedInputStream(
                new FileInputStream(new File("/path/to", "nameOfThe.pdf")));
        /*ServletOutputStream*/ outStream = res.getOutputStream();
        // to make it easier to change to 8 or 16 KBs
        int FILE_CHUNK_SIZE = 1024 * 4;
        byte[] chunk = new byte[FILE_CHUNK_SIZE];
        int bytesRead = 0;
        while ((bytesRead = bis.read(chunk)) != -1) {
            outStream.write(chunk, 0, bytesRead);
        }
        bis.close();
        outStream.flush();
        outStream.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Change
res.setHeader("Content-Disposition", "attachment;filename=report.pdf");
To
res.setHeader("Content-Disposition", "inline;filename=report.pdf");
You should also set the Content-Length header.
FileCopyUtils is handy:
@Controller
public class FileController {

    @RequestMapping("/report")
    void getFile(HttpServletResponse response) throws IOException {
        String fileName = "report.pdf";
        String path = "/path/to/" + fileName;
        File file = new File(path);
        FileInputStream inputStream = new FileInputStream(file);

        response.setContentType("application/pdf");
        response.setContentLength((int) file.length());
        response.setHeader("Content-Disposition", "inline;filename=\"" + fileName + "\"");

        FileCopyUtils.copy(inputStream, response.getOutputStream());
    }
}

GWT Spring Jasper Reports

I have an application built in GWT and Spring. I am trying to generate Jasper Reports on the server side. However, when I execute the functionality, it hangs/stops at jasperDesign = JRXmlLoader.load(file_name); and does not respond or throw an exception. This means that the RPC call that triggers the report-generation function never returns a response either (so the application hangs). However, when I run the same function in a plain Java application, it generates the report without any problem. What could be the issue? I am using JasperReports version 5.6.0. My Java function:
public StandardServerResponse printReport(List<Object> items) {
    StandardServerResponse response = new StandardServerResponse();
    String file_name = null;
    Map<String, Object> parameters;
    JasperDesign jasperDesign;
    JasperReport jasperReport;
    JasperPrint jasperPrint;
    try {
        for (Object obj : items) {
            parameters = new HashMap<String, Object>();
            parameters.put("id_in", obj.getId());
            file_name = "G:\\myreport.jrxml";
            jasperDesign = JRXmlLoader.load(file_name); // application stops here
            jasperReport = JasperCompileManager.compileReport(jasperDesign);
            jasperPrint = JasperFillManager.fillReport(jasperReport, parameters, dataSource.getConnection());
            JasperExportManager.exportReportToPdfFile(jasperPrint, "G:\\report.pdf");
        }
        response.setSuccess(true);
    } catch (Exception ex) {
        ex.printStackTrace();
        response.setSuccess(false);
    }
    return response;
}
I finally solved my problem after many long days of debugging :-).
I had these two jars in my WEB-INF/lib folder:
jasperreports-functions-5.6.0-SNAPSHOT.jar
jasperreports-fonts-5.6.0.jar
I removed them and the app worked. I still don't understand why they caused a problem, though.
I also changed my code to work with a .jasper extension and to call JasperRunManager.runReportToPdfFile(file_name, "S:\\output_report.pdf", parameters, connection); directly.
Thanks a lot, Darshan Lila, for trying; I really appreciate it. I hope this helps someone.
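For reference, a sketch of the .jasper approach described above, using the file paths from this thread (exception handling omitted):

// Compile the .jrxml to a .jasper file once (for example at build time or at startup):
JasperCompileManager.compileReportToFile("G:\\myreport.jrxml", "G:\\myreport.jasper");

// Then, per report, fill and export straight to PDF:
JasperRunManager.runReportToPdfFile("G:\\myreport.jasper", "S:\\output_report.pdf",
        parameters, dataSource.getConnection());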
