BIRT CSV emitter and EJB

I want to use the CSV emitter plugin in a Java EE application. Is that possible? I get the following error:
org.eclipse.birt.report.engine.api.UnsupportedFormatException: The output format csv is not supported.
at org.eclipse.birt.report.engine.api.impl.EngineTask.setupRenderOption(EngineTask.java:2047)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:96)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
My code:
protected String generateReportFile(IRunAndRenderTask task, IReportRunnable design, IReportEngine engine,
        String reportType, String reportPrefix, String baseDir) throws BirtReportGenerationFault {
    CSVRenderOption csvOptions = new CSVRenderOption();
    csvOptions.setOutputFormat(CSVRenderOption.OUTPUT_FORMAT_CSV);
    csvOptions.setOutputFileName("C:/birt/logs/csvTestW.csv");
    csvOptions.setShowDatatypeInSecondRow(false);
    csvOptions.setExportTableByName("data");
    csvOptions.setDelimiter("\t");
    csvOptions.setReplaceDelimiterInsideTextWith("-");
    task.setRenderOption(csvOptions);
    task.setEmitterID("org.eclipse.birt.report.engine.emitter.csv");
    try {
        task.run(); // Error here
    } catch (EngineException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    task.close();
    return "C:/birt/logs/csvTestW.csv"; // fileName
}
Same code works in a Java SE app.

I had the same problem, but with the pdf format. I solved it by adding org.eclipse.birt.report.engine.emitter.pdf to the plugin Dependencies.

I think the problem here is the case of the output format string passed to CSVRenderOption.
Instead of using csvOptions.setOutputFormat(CSVRenderOption.OUTPUT_FORMAT_CSV);
try using csvOptions.setOutputFormat("CSV");
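For reference, a minimal sketch that combines the two suggestions above (the emitter available to the engine, plus the adjusted render options), reusing the identifiers from the question. Whether the emitter expects "csv" or "CSV" depends on how that emitter registers itself, so treat the exact string as an assumption to verify; the helper method name is just for illustration.
import org.eclipse.birt.report.engine.api.EngineException;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;

// Hypothetical helper; CSVRenderOption comes from the CSV emitter plugin, as in the question (import omitted).
void renderCsv(IRunAndRenderTask task) throws EngineException {
    CSVRenderOption csvOptions = new CSVRenderOption();
    csvOptions.setOutputFormat("CSV"); // suggestion above; the original used CSVRenderOption.OUTPUT_FORMAT_CSV
    csvOptions.setOutputFileName("C:/birt/logs/csvTestW.csv");
    task.setRenderOption(csvOptions);
    // The CSV emitter plugin must also be visible to the report engine in the EE deployment
    // (listed as a dependency, as with the PDF emitter in the answer above), or the format stays unsupported.
    task.setEmitterID("org.eclipse.birt.report.engine.emitter.csv");
    task.run();
    task.close();
}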

Related

Unable to save screenshots in the designated path or attach the screen shot to extentreports in MAC

I have tried many ways to take a screenshot on test case failure but nothing works. I am unable to take a screenshot and attach it to the ExtentReports report on macOS while using Selenium.
public void onTestFailure(ITestResult tr)
{
    logger = extent.createTest(tr.getName()); // create a new entry in the report
    logger.log(Status.FAIL, MarkupHelper.createLabel(tr.getName(), ExtentColor.RED)); // send the failure information to the report, highlighted in red
    String screenshotPath = "./Stest-output/" + tr.getName() + ".png";
    TakesScreenshot ts = (TakesScreenshot) driver;
    File img = ts.getScreenshotAs(OutputType.FILE);
    File destination = new File(screenshotPath);
    try {
        FileUtils.copyFile(img, destination);
        logger.addScreenCaptureFromPath(screenshotPath);
    } catch (IOException e1) {
        // TODO Auto-generated catch block
        e1.printStackTrace();
    }
    if (img != null)
    {
        System.out.println("Screenshot is below:" + tr.getName());
        try {
            logger.info("Screenshot is below:" + logger.addScreenCaptureFromPath(screenshotPath));
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
It throws a NullPointerException when trying to copy the image from source to destination.
I have tried all the methods available on Stack Overflow.
You are unable to attach the screenshot to the Extent report HTML because you forgot to call the flush() method, which appends all the test results/screenshots to the report HTML file. There must be at least one ended test for anything to be appended to the report.
Note: If flush() is called before any test has ended, no information will be appended to the report.
@AfterTest
public void tearDown() {
    extent.flush();
}

Runtime.exec() method always returns empty in Spring environment

I tried to execute the Runtime.exec(…) method in my Spring application and I always get an empty result from getInputStream(). It works well when I execute it as a core Java application. Are any other implementations needed to execute it in a Spring environment? Thanks in advance.
Spring Version : 4.2.1.RELEASE
commons-io Version : 2.4
try {
    Process p = Runtime.getRuntime().exec(new String[] {"sh", "-c", "spamc < abcd.txt"});
    String spamResponsexx = IOUtils.toString(p.getInputStream());
    log.debug("Spam Response : " + spamResponsexx);
} catch (Exception e) {
    e.printStackTrace();
    log.error("", e);
}
This may be a user-privileges issue; I have worked on a similar type of scenario.
In this case both services should run as the same user (XYZ).
Otherwise, the 'spamc' library file locations should be pointed to your user's (XYZ) HOME path in '.bashrc', and '.bashrc' reloaded.
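Whatever the cause, permission or PATH problems usually show up on stderr or as a non-zero exit code, neither of which the original snippet reads. A minimal diagnostic sketch, assuming the same spamc command and abcd.txt file as in the question (the class name is just for illustration):
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.IOUtils;

public class SpamcDiagnostic {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", "spamc < abcd.txt");
        Process p = pb.start();

        // Read stdout and stderr separately so error messages are not lost.
        // (Fine for small output; for large output the streams should be drained concurrently.)
        String out = IOUtils.toString(p.getInputStream(), StandardCharsets.UTF_8);
        String err = IOUtils.toString(p.getErrorStream(), StandardCharsets.UTF_8);
        int exitCode = p.waitFor();

        System.out.println("exit code: " + exitCode);
        System.out.println("stdout: " + out);
        System.out.println("stderr: " + err);
    }
}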

Spring Integration and returning schema validation errors

We are using Spring Integration to process a JSON payload passed into a RESTful endpoint. As part of this flow we are using a filter to validate the JSON:
.filter(schemaValidationFilter, s -> s
.discardFlow(f -> f
.handle(message -> {
throw new SchemaValidationException(message);
}))
)
This works great. However, if the validation fails we want to capture the parsing error and return that to the user so they can act on the error. Here is the overridden accept method in the SchemaValidationFilter class:
@Override
public boolean accept(Message<?> message) {
    Assert.notNull(message);
    Assert.isTrue(message.getHeaders().containsKey(TYPE_NAME));
    String historyType = (String) message.getHeaders().get(TYPE_NAME);
    JSONObject payload = (JSONObject) message.getPayload();
    String jsonString = payload.toJSONString();
    try {
        ProcessingReport report = schemaValidator.validate(historyType, payload);
        return report.isSuccess();
    } catch (IOException | ProcessingException e) {
        throw new MessagingException(message, e);
    }
}
What we have done is throw a MessagingException in the catch block, which seems to solve the problem. However, this seems to break what a filter should do (simply return true or false).
Is there a best practice for passing the error details from the filter to the client? Is the filter the right solution for this use case?
Thanks for your help!
John
I'd say you are going the correct way. Please refer to the XmlValidatingMessageSelector; your JsonValidatingMessageSelector should be similar and must follow the same design.
Since there is a throwExceptionOnRejection option, we can always be sure that throwing an Exception instead of just returning true/false is correct behavior.
What Gary says is good, too, but according to the existing logic in that MessageSelector implementation we can go ahead with the same approach and continue to use .filter(), though of course without .discardFlow(), because we won't send the invalid message to the discardChannel.
When your JsonValidatingMessageSelector is ready, feel free to contribute it back to the Framework!
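A minimal sketch of what such a selector could look like, modeled on the throwExceptionOnRejection idea described above. This is not an existing Spring Integration class; the class name, the nested JsonSchemaValidator interface, and the TYPE_NAME placeholder are assumptions standing in for the validator and header key from the question.
import org.springframework.integration.core.MessageSelector;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessagingException;

// Hypothetical selector in the spirit of XmlValidatingMessageSelector.
public class JsonValidatingMessageSelector implements MessageSelector {

    private static final String TYPE_NAME = "TYPE_NAME"; // placeholder for the header key used in the question

    // Stand-in for the question's schemaValidator; the real type comes from the JSON schema library in use.
    public interface JsonSchemaValidator {
        boolean isValid(String type, Object payload) throws Exception;
    }

    private final JsonSchemaValidator schemaValidator;
    private final boolean throwExceptionOnRejection;

    public JsonValidatingMessageSelector(JsonSchemaValidator schemaValidator, boolean throwExceptionOnRejection) {
        this.schemaValidator = schemaValidator;
        this.throwExceptionOnRejection = throwExceptionOnRejection;
    }

    @Override
    public boolean accept(Message<?> message) {
        String type = (String) message.getHeaders().get(TYPE_NAME);
        boolean valid;
        try {
            valid = this.schemaValidator.isValid(type, message.getPayload());
        }
        catch (Exception e) {
            throw new MessagingException(message, e);
        }
        if (!valid && this.throwExceptionOnRejection) {
            // Like throwExceptionOnRejection on the XML selector: the caller gets the reason, not just 'false'.
            throw new MessagingException(message, "JSON schema validation failed");
        }
        return valid;
    }
}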
It's probably more correct to do the validation in a <service-activator/>...
public Message<?> validate(Message<?> message) {
    ...
    try {
        ProcessingReport report = schemaValidator.validate(historyType, payload);
        return message;
    }
    catch (IOException | ProcessingException e) {
        throw new MessagingException(message, e);
    }
}
...since you're never really filtering.

Profiling Hadoop

UPDATE:
I emailed Shevek, the founder of Karmasphere, for help. He had given a presentation on Hadoop profiling at ApacheCon 2011. He advised me to look for Throwable. A catch block for Throwable shows:
localhost: java.lang.IncompatibleClassChangeError: class com.kannan.mentor.sourcewalker.ClassInfoGatherer has interface org.objectweb.asm.ClassVisitor as super class
localhost: at java.lang.ClassLoader.defineClass1(Native Method)
localhost: at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
Hadoop ships an ASM 3.2 jar and I am using ASM 5.0. In 5.0, ClassVisitor is an abstract class, while in 3.2 it is an interface. I am planning to change my profiler to 3.2. Is there any better way to fix this issue?
BTW, Shevek is super cool. A founder and CEO, responding to some anonymous guy's emails. Imagine that.
END UPDATE
I am trying to profile Hadoop (JobTracker, NameNode, DataNode, etc.). I created a profiler using ASM 5 and tested it on Spring, where everything works fine.
Then I tested the profiler on Hadoop in pseudo-distributed mode.
@Override
public byte[] transform(ClassLoader loader, String className,
        Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
        byte[] classfileBuffer) throws IllegalClassFormatException {
    try {
        /*1*/ System.out.println(" inside transformer " + className);
        ClassReader cr = new ClassReader(classfileBuffer);
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
        /* c-start */ // CheckClassAdapter cxa = new CheckClassAdapter(cw);
        ClassVisitor cv = new ClassInfoGatherer(cw);
        /* c-end */ cr.accept(cv, ClassReader.EXPAND_FRAMES);
        byte[] b = cw.toByteArray();
        /*2*/ System.out.println(" inside transformer - returning" + b.length);
        return b;
    } catch (Exception e) {
        System.out.println(" class might not be found " + e.getMessage());
        try {
            throw new ClassNotFoundException(className, e);
        } catch (ClassNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
    }
    return null;
}
I can see the first sysout statement printed but not the second one. There is no error either. If I comment out the block from /* c-start */ to /* c-end */ and replace cw with classfileBuffer, I can see the second sysout statement. The moment I uncomment the line
ClassVisitor cv = new ClassInfoGatherer(cw);
ClassInfoGatherer constructor:
public ClassInfoGatherer(ClassVisitor cv) {
    super(ASM5, cv);
}
I no longer see the second sysout statement.
What am I doing wrong here? Is Hadoop swallowing my sysouts? I tried System.err too. Even if that is the case, why can I see the first sysout statement?
Any suggestion would be helpful. I think I am missing something simple and obvious here... but I can't figure it out.
The following lines were added to hadoop-env.sh:
export HADOOP_NAMENODE_OPTS="-javaagent:path to jar $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-javaagent:path to jar $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-javaagent:path to jar $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-javaagent:path to jar $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-javaagent:path to jar $HADOOP_JOBTRACKER_OPTS"
Hadoop had ASM 3.2 and I was using ASM 5. In ASM 5, ClassVisitor is an abstract class, while in 3.2 it is an interface. For some reason the error was a Throwable (credits to Shevek) and the catch block was only catching Exceptions. The Throwable was not captured in any of the Hadoop logs, so it was very tough to debug.
I used Jar Jar Links to fix the ASM version issue and everything works fine now.
If you are using Hadoop, something is not working, and there are no logs showing any errors, then please try to catch Throwable.
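A minimal sketch of that advice applied to a java.lang.instrument transformer, assuming a structure like the transform method above; the class name is hypothetical. The point is only that errors such as IncompatibleClassChangeError extend Throwable, not Exception, so a catch (Exception e) block never sees them:
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.IllegalClassFormatException;
import java.security.ProtectionDomain;

// Hypothetical transformer skeleton illustrating the "catch Throwable" advice.
public class SafeTransformer implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
            ProtectionDomain protectionDomain, byte[] classfileBuffer) throws IllegalClassFormatException {
        try {
            // ... instrument the class with ASM here ...
            return classfileBuffer;
        } catch (Throwable t) {
            // Errors like IncompatibleClassChangeError are not Exceptions,
            // so they would silently escape a catch (Exception e) block.
            System.err.println("transform failed for " + className + ": " + t);
            return null; // returning null keeps the original, uninstrumented bytes
        }
    }
}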
Arun

Accessing Distributed Cache in Pig StoreFunc

I have looked at all the other threads on this topic and still have not found an answer...
Put simply, I want to access the Hadoop distributed cache from a Pig StoreFunc, and NOT from within a UDF directly.
Relevant PIG code lines:
DEFINE CustomStorage KeyValStorage('param1','param2','param3');
...
STORE BLAH INTO /path/ using CustomStorage();
Relevant Java Code:
public class KeyValStorage<M extends Message> extends BaseStoreFunc /* ElephantBird storage, which inherits from StoreFunc */ {
    ...
    public KeyValStorage(String param1, String param2, String param3) {
        ...
        try {
            InputStream is = new FileInputStream(configName);
            try {
                prop.load(is);
            } catch (IOException e) {
                System.out.println("PROPERTY LOADING FAILED");
                e.printStackTrace();
            }
        } catch (FileNotFoundException e) {
            System.out.println("FILE NOT FOUND");
            e.printStackTrace();
        }
    }
    ...
}
configName is the name of the LOCAL file that I should be able to read from the distributed cache; however, I am getting a FileNotFoundException. When I use the EXACT same code from within a Pig UDF directly, the file is found, so I know the file is being shipped via the distributed cache. I set the appropriate parameter to make sure this happens:
<property><name>mapred.cache.files</name><value>/path/to/file/file.properties#configName</value></property>
Any ideas how I can get around this?
Thanks!
StoreFunc's constructor is called both on the frontend and on the backend. When it is called from the frontend (before the job is launched), you'll get a FileNotFoundException because at that point the files from the distributed cache have not yet been copied to the nodes' local disks.
You may check whether you are on the backend (when the job is being executed) and load the file only in that case, e.g.:
DEFINE CustomStorage KeyValStorage('param1','param2','param3');
set mapreduce.job.cache.files hdfs://host/user/cache/file.txt#config
...
STORE BLAH INTO /path/ using CustomStorage();
public KeyValStorage(String param1, String param2, String param3) {
    ...
    try {
        if (!UDFContext.getUDFContext().isFrontend()) {
            InputStream is = new FileInputStream("./config");
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            ...
            ...
}
