JasperReports Server "The job failed to execute" - jdbc

I am new to JasperReports Server and have searched for a resolution to this issue, but have found nothing. I inherited this server, and the reports are not running as scheduled.
Job: 3rd (ID: 61)
Report unit: /reports/Scheduled/00_Schedule_Primer
Quartz Job: ReportJobs.job_61
Quartz Trigger: ReportJobs.trigger_62_1
Exceptions:
An error occurred while executing the report.
com.jaspersoft.jasperserver.api.JSException: jsexception.error.creating.connection
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.createConnection(JdbcDataSourceService.java:62)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.BaseJdbcDataSource.setReportParameterValues(BaseJdbcDataSource.java:52)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.setReportParameterValues(JdbcDataSourceService.java:67)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.fillReport(EngineServiceImpl.java:743)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.fillReport(EngineServiceImpl.java:367)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.executeReport(EngineServiceImpl.java:876)
at com.jaspersoft.jasperserver.api.engine.jasperreports.domain.impl.ReportUnitRequest.execute(ReportUnitRequest.java:60)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.execute(EngineServiceImpl.java:301)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.executeReport(ReportExecutionJob.java:444)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.executeAndSendReport(ReportExecutionJob.java:372)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.execute(ReportExecutionJob.java:188)
at org.quartz.core.JobRunShell.run(JobRunShell.java:195)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:520)
Caused by: java.sql.SQLException: Middleware connect fail:No RPC Connection active.
at com.ibm.u2.jdbc.UniJDBCMsgFactory.createException(UniJDBCMsgFactory.java:113)
at com.ibm.u2.jdbc.UniJDBCExceptionSupport.addAndThrowException(UniJDBCExceptionSupport.java:62)
at com.ibm.u2.jdbc.UniJDBCProtocolU2Impl.connect(UniJDBCProtocolU2Impl.java:746)
at com.ibm.u2.jdbc.UniJDBCProtocolU2Impl.executeOpenDatabase(UniJDBCProtocolU2Impl.java:243)
at com.ibm.u2.jdbc.UniJDBCConnectionImpl.<init>(UniJDBCConnectionImpl.java:116)
at com.ibm.u2.jdbc.UniJDBCDriver.connect(UniJDBCDriver.java:111)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:48)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:290)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:771)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:95)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.createConnection(JdbcDataSourceService.java:58)
... 12 more
I don't even have a clue where to start troubleshooting this. ANY help would be awesome!

It seems your connection URL, username and password values are not correct. If those three things are correct, the connection will be created, so check them in your JasperReports Server data source configuration. If they are right, the "Middleware connect fail: No RPC Connection active" message suggests the driver cannot reach the database server at all, so also check that the database host is up and reachable from the JasperReports Server machine.
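If you want to rule out JasperReports Server itself, a minimal standalone sketch of the same JDBC connection is below. The URL, user and password are placeholders - copy the exact values from your data source definition, and have the IBM U2 JDBC driver jar on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class U2ConnectionTest {
    public static void main(String[] args) throws Exception {
        // Driver class taken from the stack trace above
        Class.forName("com.ibm.u2.jdbc.UniJDBCDriver");
        // Placeholder URL and credentials -- use the exact values from the JasperReports Server data source
        String url = "jdbc:ibm-u2://dbhost/ACCOUNT";
        try (Connection c = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected, closed = " + c.isClosed());
        }
    }
}

If this fails with the same "No RPC Connection active" error, the problem is between this machine and the database server rather than in JasperReports Server.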

Related

Apache Nifi Web Server keeps failing to start with Decryption exception

I have a setup in which the NiFi Web Server suddenly started failing to start after upgrading from version 1.15.3 to 1.16.1. The following exception keeps occurring on the Apache NiFi cluster:
2022-05-11 22:53:40,570 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.encrypt.EncryptionException: Decryption Failed with Algorithm [PBEWITHMD5AND256BITAES-CBC-OPENSSL]
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:78)
at org.apache.nifi.fingerprint.FingerprintFactory.decrypt(FingerprintFactory.java:931)
at org.apache.nifi.fingerprint.FingerprintFactory.getLoggableRepresentationOfSensitiveValue(FingerprintFactory.java:561)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameter(FingerprintFactory.java:330)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameterContext(FingerprintFactory.java:302)
at org.apache.nifi.fingerprint.FingerprintFactory.addFlowControllerFingerprint(FingerprintFactory.java:210)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:153)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:127)
at org.apache.nifi.controller.inheritance.FlowFingerprintCheck.checkInheritability(FlowFingerprintCheck.java:45)
at org.apache.nifi.controller.XmlFlowSynchronizer.sync(XmlFlowSynchronizer.java:200)
at org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:43)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1524)
at org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:815)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:457)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1086)
at org.apache.nifi.NiFi.<init>(NiFi.java:170)
at org.apache.nifi.NiFi.<init>(NiFi.java:82)
at org.apache.nifi.NiFi.main(NiFi.java:330)
Caused by: javax.crypto.BadPaddingException: pad block corrupted
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source)
at javax.crypto.Cipher.doFinal(Cipher.java:2168)
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:74)
... 18 common frames omitted
relevant nifi.properties:
nifi.sensitive.props.key=<hidden>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.additional.keys=
I have already tried to tear it all down and re-install 1.15.3 without any other changes, but the same issue still persists. Can someone please share any ideas on how to fix this?

worker is getting restarted continuously with closedchannel exception in supervisor

A worker on one of the supervisors is getting restarted continuously with a ClosedChannelException. But if I run the same topology on another Storm cluster in another environment, it runs without any errors.
Below is the error I can see in the Storm UI.
java.lang.RuntimeException: java.nio.channels.ClosedChannelException
at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
at org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
at org.apache.storm.daemon.executor$fn__7990$fn__8005$fn__8036.invoke(executor.clj:648)
at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
at org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94)
at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
... 6 more
Can anyone please help me find the exact issue? Please let me know if you need any more information.
I faced this issue, and the problem was that the ZooKeeper host names were not being resolved from the worker host.
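A quick sketch for checking that from the worker host (the host names below are placeholders - use the entries from storm.zookeeper.servers in your storm.yaml):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ZkHostCheck {
    public static void main(String[] args) {
        // Placeholder host names -- replace with the ones configured in storm.zookeeper.servers
        String[] hosts = {"zk1.example.com", "zk2.example.com"};
        for (String host : hosts) {
            try {
                System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " -> cannot be resolved from this host");
            }
        }
    }
}

If a name does not resolve, fix DNS or add an /etc/hosts entry on the worker host.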

Connection refused error in worker logs - apache storm

I see the below error in the worker logs. It happens almost every millisecond, but the cluster is running fine. I wanted to know what these errors mean and why they might occur.
This happens on all the worker nodes
2016-05-12T15:32:53.514-0500 b.s.m.n.Client [ERROR] connection attempt 3 to Netty-Client-xxxxx.hq.abc.com/xx.xx.xxx.xx:6700 failed: java.net.ConnectException: Connection refused: xxxxx.hq.abc.com/xx.xx.xxx.xxx:6700
And after some time I see this:
2016-05-12T15:44:25.940-0500 b.s.m.n.Client [ERROR] discarding 1 messages because the Netty client to Netty-Client-xxxxx.hq.abc.com/xx.xx.xxx.xxx:6700 is being closed
After struggling with this problem for a long time, I found that setting the storm.local.hostname property in the storm.yaml file solved the issue for me. On my laptop, I set storm.zookeeper.servers, nimbus.host and storm.local.hostname all to "localhost".
I am using version 0.10.2 of storm.
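A minimal sketch of the storm.yaml entries described above, with everything pointed at "localhost" for a single-machine setup (on a real cluster, use the actual host names):

storm.zookeeper.servers:
  - "localhost"
nimbus.host: "localhost"
storm.local.hostname: "localhost"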

Zabbix JMX Tomcat8 monitoring fails

I'm trying to monitor Tomcat8 with JDK8 using JMX.
I have set up my agents and modified the startup.sh.
On my zabbix_java_gateway.log I get the following exception:
WARN com.zabbix.gateway.SocketProcessor - error processing request
com.zabbix.gateway.ZabbixException: java.net.SocketTimeoutException:
connection timed out:
service:jmx:rmi:///jndi/rmi://server1.example.com:10052/jmxrmi
at com.zabbix.gateway.JMXItemChecker.getValues(JMXItemChecker.java:97)
~[zabbix-java-gateway-2.4.7.jar:na]
at com.zabbix.gateway.SocketProcessor.run(SocketProcessor.java:63)
~[zabbix-java-gateway-2.4.7.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71] Caused by: java.net.SocketTimeoutException: connection timed out:
service:jmx:rmi:///jndi/rmi://server1.example.com:10052/jmxrmi
at com.zabbix.gateway.ZabbixJMXConnectorFactory.connect(ZabbixJMXConnectorFactory.java:123)
~[zabbix-java-gateway-2.4.7.jar:na]
at com.zabbix.gateway.JMXItemChecker.getValues(JMXItemChecker.java:89)
~[zabbix-java-gateway-2.4.7.jar:na]
... 4 common frames omitted
On my startup.sh I added the following to the CATALINA_OPTS
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10052 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.password.file=/opt/tomcat-latest/conf/jmxremote.password
-Dcom.sun.management.jmxremote.access.file=/opt/tomcat-latest/conf/jmxremote.access
-Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=server1.example.com
My zabbix_agentd.conf contains the following:
PidFile=/tmp/zabbix_agentd.pid
LogFile=/var/log/zabbix_agentd.log
LogFileSize=1
DebugLevel=3
Server=monitor.example.com
Hostname=server1.example.com
ListenPort=10050
StartAgents=5
Timeout=30
I have already done the following:
successfully connected to the server using jconsole
removed authentication
telnetted to the server on ports 10050 / 10052
The weird part is that the same setup works well for Tomcat6 with JDK7.
EDIT 1
I've updated the JDK version on the Zabbix server to be newer than the JDK installed on my Java nodes - still the same result - it ends with
ZBX_TCP_READ() failed: [4] Interrupted system call
UPDATE
So I figured it out eventually.
I had -Djava.rmi.server.hostname=server1.example.com in my Tomcat configuration file.
I had misunderstood which hostname it should be set to - the monitoring server's or the monitored server's.
Apparently there is a bug in Tomcat 6 and this directive does not work there.
Removing it solved the problem completely.
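For reference, a sketch of the CATALINA_OPTS from the question with the -Djava.rmi.server.hostname flag removed (ports and paths are the ones shown above):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10052
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/opt/tomcat-latest/conf/jmxremote.password
-Dcom.sun.management.jmxremote.access.file=/opt/tomcat-latest/conf/jmxremote.access
-Dcom.sun.management.jmxremote.ssl=false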
Thanks,
Liron

WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for the WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's carbon dashboard in Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle. Therefore I am not able to see statistics in the API Manager publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting following error in log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (jars) into your project's /WEB-INF/lib directory; everything under /WEB-INF/lib is then on the classpath.
Use the snappy-java jar and it will work as you want.
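If you want to confirm the jar is actually picked up, a tiny sketch that tries to load the missing class (the class name is taken from the NoClassDefFoundError above):

public class SnappyCheck {
    public static void main(String[] args) throws Exception {
        // Class name comes from the NoClassDefFoundError in the log above
        Class<?> c = Class.forName("org.xerial.snappy.SnappyInputStream");
        System.out.println("Loaded " + c.getName() + " from "
                + c.getProtectionDomain().getCodeSource().getLocation());
    }
}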
