WSO2 DAS 3.0.0 with API Manager 1.9.0 not working - Oracle

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, using Oracle for WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's Carbon dashboard, in the Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
However, the data is not being stored in Oracle, so I am not able to see statistics in the API Manager Publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
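For reference, my WSO2AM_STATS_DB datasource entry in <DAS_HOME>/repository/conf/datasources/master-datasources.xml follows the usual pattern; this is a sketch with placeholder connection details, not my real values:
<datasource>
    <name>WSO2AM_STATS_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_STATS_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Placeholder host/SID/credentials -->
            <url>jdbc:oracle:thin:@dbhost:1521/ORCL</url>
            <username>apim_stats</username>
            <password>********</password>
            <driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
            <validationQuery>SELECT 1 FROM DUAL</validationQuery>
        </configuration>
    </definition>
</datasource>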
I am getting the following error in the log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me figure out this issue?
Thanks in advance.

Move all the required libraries (JARs) into your project's /WEB-INF/lib; everything under /WEB-INF/lib is then on the classpath.
Add the snappy-java JAR and it will work as you want.
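Note that DAS is an OSGi-based Carbon server, so the usual drop-in location for third-party JARs is typically repository/components/lib rather than a WEB-INF/lib folder. A minimal sketch, assuming a snappy-java version compatible with the bundled Spark (the version number below is a placeholder):
# Copy the snappy-java JAR into the server's external-library directory
# (path and version are assumptions; adjust to your installation).
cp snappy-java-1.1.1.7.jar $DAS_HOME/repository/components/lib/
# Restart DAS so the new JAR is picked up.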

Related

SAP Vora Thrift Server Error: Instantiating dialect 'sapsql' failed

I have deployed a Cloudera CDH 5.13.1 cluster with SAP Vora 1.4 Patch 4.
When I start the Vora Thrift server everything looks fine, but as soon as I start the SAP Vora tools and log in, the following error shows up:
17/12/20 11:26:52 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING,
org.apache.spark.sql.catalyst.errors.package$DialectException: Instantiating dialect 'sapsql' failed.
Reverting to default dialect 'sapsql'
at org.apache.spark.sql.SQLContext.getSQLDialect(SQLContext.scala:225)
at org.apache.spark.sql.hive.HiveContext.getSQLDialect(HiveContext.scala:577)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$1.apply(SapHiveContext.scala:54)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$1.apply(SapHiveContext.scala:54)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1$$anonfun$apply$2.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Failure.append(Parsers.scala:202)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
at scala.util.parsing.combinator.Parsers$$anon$2$$anonfun$apply$14.apply(Parsers.scala:891)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at scala.util.parsing.combinator.Parsers$$anon$2.apply(Parsers.scala:890)
at scala.util.parsing.combinator.PackratParsers$$anon$1.apply(PackratParsers.scala:110)
at org.apache.spark.sql.catalyst.AbstractSparkSQLParser.parse(AbstractSparkSQLParser.scala:34)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$2.apply(SapHiveContext.scala:58)
at org.apache.spark.sql.hive.SapHiveContext$$anonfun$2.apply(SapHiveContext.scala:58)
at org.apache.spark.sql.execution.datasources.DDLParser.parse(DDLParser.scala:43)
at org.apache.spark.sql.SQLContext.parseSql(SQLContext.scala:231)
at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:334)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:829)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:211)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.extension.SapSQLDialect
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:177)
at org.apache.spark.sql.SQLContext.getSQLDialect(SQLContext.scala:215)
... 54 more
The installation guide says I need to grant the vora user authorization for the Hive Metastore.
Since this is only a test setup, authorization is disabled in Hive; the vora user can create and drop tables in the default database and has write access to Hive's warehouse location.
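For reference, authorization is turned off via the standard Hive property in hive-site.xml (a sketch of what that disabled setup looks like):
<!-- hive-site.xml: authorization disabled for this test setup -->
<property>
  <name>hive.security.authorization.enabled</name>
  <value>false</value>
</property>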
How can I solve it?
This issue is caused by an incompatibility between CDH 5.13 and Vora 1.4 Patch 4. The issue is currently being investigated by SAP.
Is it an option for you to move to a newer Vora version? The current version is Vora 2.1. Since version 2.0, Vora is deployed in a Kubernetes cluster instead of the Hadoop cluster, which could help overcome this CDH dependency issue.

ClassCastException on Drop table query in apache spark hive

I'm using the following Hive query:
this.queryExecutor.executeQuery("Drop table user")
and am getting the following exception:
java.lang.LinkageError: ClassCastException: attempting to castjar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.class
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
at com.sun.jersey.api.client.Client.init(Client.java:342)
at com.sun.jersey.api.client.Client.access$000(Client.java:118)
at com.sun.jersey.api.client.Client$1.f(Client.java:191)
at com.sun.jersey.api.client.Client$1.f(Client.java:187)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.api.client.Client.<init>(Client.java:187)
at com.sun.jersey.api.client.Client.<init>(Client.java:170)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:340)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.hive.ql.hooks.ATSHook.<init>(ATSHook.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1309)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1293)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1347)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:495)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:474)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:613)
at org.apache.spark.sql.hive.execution.DropTable.run(commands.scala:89)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at com.accenture.aa.dmah.spark.core.QueryExecutor.executeQuery(QueryExecutor.scala:35)
at com.accenture.aa.dmah.attribution.transformer.MulltipleUserJourneyTransformer.transform(MulltipleUserJourneyTransformer.scala:32)
at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder$$anonfun$buildUserJourney$1.apply$mcVI$sp(UserJourneyBuilder.scala:31)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder.buildUserJourney(UserJourneyBuilder.scala:29)
at com.accenture.aa.dmah.attribution.core.AttributionHub.executeAttribution(AttributionHub.scala:47)
at com.accenture.aa.dmah.attribution.jobs.AttributionJob.process(AttributionJob.scala:33)
at com.accenture.aa.dmah.core.DMAHJob.processJob(DMAHJob.scala:73)
at com.accenture.aa.dmah.core.DMAHJob.execute(DMAHJob.scala:27)
at com.accenture.aa.dmah.core.JobRunner.<init>(JobRunner.scala:17)
at com.accenture.aa.dmah.core.ApplicationInstance.initilize(ApplicationInstance.scala:48)
at com.accenture.aa.dmah.core.Bootstrap.boot(Bootstrap.scala:112)
at com.accenture.aa.dmah.core.BootstrapObj$.main(Bootstrap.scala:134)
at com.accenture.aa.dmah.core.BootstrapObj.main(Bootstrap.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:71)
at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:139)
at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:71)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:139)
at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:28)
at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:45)
at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:35)
at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:45)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:96)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:105)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
I saw there have been similar posts here and here, but they haven't had any response so far.
I have also looked here, but I don't think that's a valid course of action in my case.
What's intriguing is that this happens specifically when we use a drop table (or drop table if exists) query.
Hoping to find a resolution for this.
To my knowledge, the above error occurs when the same class (here 'javax.ws.rs.ext.RuntimeDelegate') is found in different JARs. Class objects are created and cast at runtime, so when the code responsible for the DROP statement uses this class, the cast can break because the class is found more than once on the classpath.
I have tried DROP and DROP IF EXISTS in CDH 5 and they worked without issue. Below are the details of my runs:
first run - Hadoop 2.6, Hive 1.1.0 and Spark 1.3.1 (Hive libraries included in the Spark lib directory)
second run - Hadoop 2.6, Hive 1.1.0 and Spark 1.6.1
Mode of run - CLI
scala> sqlContext.sql("DROP TABLE SAMPLE");
16/08/04 11:31:39 INFO parse.ParseDriver: Parsing command: DROP TABLE SAMPLE
16/08/04 11:31:39 INFO parse.ParseDriver: Parse Completed
......
scala>sqlContext.sql("DROP TABLE IF EXISTS SAMPLE");
16/08/04 11:40:34 INFO parse.ParseDriver: Parsing command: DROP TABLE IF EXISTS SAMPLE
16/08/04 11:40:35 INFO parse.ParseDriver: Parse Completed
.....
If possible, please validate the DROP commands using a different version of the Spark libraries to narrow down the problem scope.
Meanwhile, I am analyzing the JARs to find where two occurrences of the same 'RuntimeDelegate' class exist, and will report back whether removing one of the JARs fixes the issue (adding it back should recreate it). A sketch of that scan follows.
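A minimal sketch of such a scan, runnable from a plain Scala or spark-shell session; the lib directory is taken from the stack trace above and may differ on your cluster:
import java.io.File
import java.util.zip.ZipFile

// Lib directory taken from the stack trace; adjust to your cluster layout.
val libDir = new File("/usr/hdp/2.4.2.0-258/spark/lib")
// The class the LinkageError reports twice.
val target = "javax/ws/rs/ext/RuntimeDelegate.class"

val jars = Option(libDir.listFiles()).getOrElse(Array.empty[File])
for (jar <- jars if jar.getName.endsWith(".jar")) {
  val zip = new ZipFile(jar)
  try {
    if (zip.getEntry(target) != null) println(s"${jar.getName} contains $target")
  } finally zip.close()
}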

JasperReports Server "The job failed to execute"

I am new to JasperReports Server and have searched for a resolution to this issue, but found nothing. I inherited this server, and the reports are not running as scheduled.
Job: 3rd (ID: 61)
Report unit: /reports/Scheduled/00_Schedule_Primer
Quartz Job: ReportJobs.job_61
Quartz Trigger: ReportJobs.trigger_62_1
Exceptions:
An error occurred while executing the report.
com.jaspersoft.jasperserver.api.JSException: jsexception.error.creating.connection
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.createConnection(JdbcDataSourceService.java:62)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.BaseJdbcDataSource.setReportParameterValues(BaseJdbcDataSource.java:52)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.setReportParameterValues(JdbcDataSourceService.java:67)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.fillReport(EngineServiceImpl.java:743)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.fillReport(EngineServiceImpl.java:367)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.executeReport(EngineServiceImpl.java:876)
at com.jaspersoft.jasperserver.api.engine.jasperreports.domain.impl.ReportUnitRequest.execute(ReportUnitRequest.java:60)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.EngineServiceImpl.execute(EngineServiceImpl.java:301)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.executeReport(ReportExecutionJob.java:444)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.executeAndSendReport(ReportExecutionJob.java:372)
at com.jaspersoft.jasperserver.api.engine.scheduling.quartz.ReportExecutionJob.execute(ReportExecutionJob.java:188)
at org.quartz.core.JobRunShell.run(JobRunShell.java:195)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:520)
Caused by: java.sql.SQLException: Middleware connect fail:No RPC Connection active.
at com.ibm.u2.jdbc.UniJDBCMsgFactory.createException(UniJDBCMsgFactory.java:113)
at com.ibm.u2.jdbc.UniJDBCExceptionSupport.addAndThrowException(UniJDBCExceptionSupport.java:62)
at com.ibm.u2.jdbc.UniJDBCProtocolU2Impl.connect(UniJDBCProtocolU2Impl.java:746)
at com.ibm.u2.jdbc.UniJDBCProtocolU2Impl.executeOpenDatabase(UniJDBCProtocolU2Impl.java:243)
at com.ibm.u2.jdbc.UniJDBCConnectionImpl.<init>(UniJDBCConnectionImpl.java:116)
at com.ibm.u2.jdbc.UniJDBCDriver.connect(UniJDBCDriver.java:111)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at org.apache.commons.dbcp.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:48)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:290)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:771)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:95)
at com.jaspersoft.jasperserver.api.engine.jasperreports.service.impl.JdbcDataSourceService.createConnection(JdbcDataSourceService.java:58)
... 12 more
I don't even have a clue where to start troubleshooting this. ANY help would be awesome!
It seems your connection URL, username, and password values are not correct. If these three things are correct, the connection will be created. Check your JasperReports Server data source configuration.
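A quick way to verify those three values outside the server is to open the same JDBC connection directly. A minimal sketch in Scala, assuming the U2 driver JAR is on the classpath; the driver class is taken from the stack trace, but the URL and credentials below are placeholders, so check the IBM U2 JDBC documentation for the exact URL format:
import java.sql.DriverManager

// URL/credentials are placeholders -- substitute the exact values
// from the JasperReports data source definition.
Class.forName("com.ibm.u2.jdbc.UniJDBCDriver")
val conn = DriverManager.getConnection("jdbc:ibm-u2://dbhost/ACCOUNT", "user", "password")
println("Connected: " + conn.getMetaData.getDatabaseProductName)
conn.close()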

Log Oracle SQL statements with Squeryl and Play 2

I am trying to log the SQL produced by Squeryl in a Play 2 application, for debugging purposes. I am using this with the following Oracle logging properties:
.level=SEVERE
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
oracle.net.ns.level=FINEST
oracle.net.ns.handlers=java.util.logging.ConsoleHandler
This has worked for me before in a non-Play application with the same Oracle driver jar, but in a Play application, the JUL-to-SLF4J bridge seems to be causing a problem:
Oops, cannot start the server.
Configuration error: Configuration error[Cannot connect to database [default]]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:92)
at play.api.Configuration.reportError(Configuration.scala:570)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:252)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:243)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at play.api.db.BoneCPPlugin.onStart(DB.scala:243)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:88)
at scala.collection.immutable.List.foreach(List.scala:318)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.api.Play$$anonfun$start$1.apply(Play.scala:88)
at play.utils.Threads$.withContextClassLoader(Threads.scala:18)
at play.api.Play$.start(Play.scala:87)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:52)
at play.core.server.NettyServer$.createServer(NettyServer.scala:243)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:279)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:274)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:274)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: java.lang.IllegalArgumentException: can't parse argument number 18=false
at java.text.MessageFormat.makeFormat(MessageFormat.java:1339)
at java.text.MessageFormat.applyPattern(MessageFormat.java:458)
at java.text.MessageFormat.<init>(MessageFormat.java:350)
at java.text.MessageFormat.format(MessageFormat.java:811)
at org.slf4j.bridge.SLF4JBridgeHandler.getMessageI18N(SLF4JBridgeHandler.java:268)
at org.slf4j.bridge.SLF4JBridgeHandler.callLocationAwareLogger(SLF4JBridgeHandler.java:223)
at org.slf4j.bridge.SLF4JBridgeHandler.publish(SLF4JBridgeHandler.java:301)
at java.util.logging.Logger.log(Logger.java:481)
at java.util.logging.Logger.doLog(Logger.java:503)
at java.util.logging.Logger.log(Logger.java:547)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:919)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:267)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1625)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:365)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:557)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:233)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:351)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:245)
... 22 more
I tried simply removing the JUL to SLF4J bridge jar from my deployed application, but Play refuses to start if that jar isn't present, so that didn't work.
I obviously don't need to use this particular approach; I just want some way to log the SQL selects being executed (preferably without admin access to the Oracle server).
I just needed to change the oracle.net.ns.level to SEVERE. Logging of oracle.net is only needed if you want to log the network packets being sent to and from the server, which I didn't need in this case.
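For reference, the working configuration is the original properties file with only that one level changed:
.level=SEVERE
oracle.jdbc.level=FINE
oracle.jdbc.handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
oracle.net.ns.level=SEVERE
oracle.net.ns.handlers=java.util.logging.ConsoleHandler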

The "Spring XD" xd-shell can't run the hadoop fs ls command, the command returns a java exception

I compiled the latest Spring XD as I needed CDH support. I am able to start the server; however, when I connect to it via the xd-shell and try to change the configuration, I get the errors below. Also, this is a Kerberized cluster, and I am not sure how XD will/can handle that.
1st scenario:
admin config server --uri http://testdomain:10111
hadoop config fs --namenode hdfs://nameservice1:8020
hadoop config props set hadoop.security.group.mapping=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
hadoop config props load hadoop.security.group.mapping
hadoop fs ls
Error message:
xd:>hadoop fs ls
-ls: Fatal internal error
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
at org.apache.hadoop.security.Groups.<init>(Groups.java:55)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
at org.apache.hadoop.security.UserGroupInformation.initUGI(UserGroupInformation.java:252)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:223)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:214)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:277)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:668)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:573)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2428)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2420)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2288)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:162)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:300)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:270)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:254)
at org.springframework.xd.shell.hadoop.FsShellCommands.run(FsShellCommands.java:412)
at org.springframework.xd.shell.hadoop.FsShellCommands.runCommand(FsShellCommands.java:407)
at org.springframework.xd.shell.hadoop.FsShellCommands.ls(FsShellCommands.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:191)
at org.springframework.shell.core.SimpleExecutionStrategy.invoke(SimpleExecutionStrategy.java:64)
at org.springframework.shell.core.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:48)
at org.springframework.shell.core.AbstractShell.executeCommand(AbstractShell.java:127)
at org.springframework.shell.core.JLineShell.promptLoop(JLineShell.java:483)
at org.springframework.shell.core.JLineShell.run(JLineShell.java:157)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
... 35 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.JniBasedUnixGroupsMapping
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:38)
... 40 more
2nd scenario
Alternatively, I remove some Java opts and run steps 1 and 2 from the previous scenario, then:
hadoop config props set hadoop.security.authorization=true
hadoop config props set hadoop.security.authentication=kerberos
The error is below:
16:50:29,682 WARN Spring Shell util.NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: Authorization (hadoop.security.authorization) is enabled but authentication (hadoop.security.authentication) is configured as simple. Please configure another method like kerberos or digest.
Thanks for your assistance - can't wait to get this working!
Thanks for raising this - we haven't tested with authorization/authentication in the shell for a while, though it is tested as part of the https://github.com/vmware-serengeti/serengeti-ws project.
Are you able to perform operations using the standard Hadoop file system shell? For example:
hdfs dfs -ls /user/hadoop/file1
There is currently no specific support in XD for running against a secured Hadoop cluster.
Feel free to open a JIRA ticket at https://jira.springsource.org/browse/XD -- this is something we know we will have to address soon.
