I'm trying to run a Grails 3 application with the ojdbc6 dependency. I'm importing the libraries below in my Groovy service, which is supposed to connect to an Oracle database and call a stored procedure.
import oracle.sql.ARRAY
import oracle.sql.ArrayDescriptor
import oracle.jdbc.OracleCallableStatement
import java.sql.Connection
import groovy.sql.Sql
import org.apache.poi.ss.usermodel.Workbook
import org.apache.poi.ss.usermodel.WorkbookFactory
import org.apache.poi.ss.usermodel.Sheet
import org.apache.poi.ss.usermodel.Cell
import org.apache.poi.ss.usermodel.Row
import org.apache.poi.ss.usermodel.DataFormatter
import com.wwt.itemuploadapi.rectypes.Rectype
import java.sql.SQLException
class ExcelService {

    def dataSource

    private static final FILE_HEADERS = [
        'First Name': 'firstName',
        'Last Name' : 'lastName'
    ]

    def callApi(List<Rectype> rectype) {
        OracleCallableStatement callableStmt = null
        try {
            def conn = dataSource.getConnection()
            ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor("TBLTYPE", conn.unwrap(oracle.jdbc.OracleConnection.class))
            ARRAY dataElementsArray = new ARRAY(descriptor, conn.unwrap(oracle.jdbc.OracleConnection.class), (Object[]) rectype.toArray())
            Map map = conn.getTypeMap()
            map.put("REC_TYPE", Rectype.class)
            callableStmt = (OracleCallableStatement) conn.prepareCall("{call package.procedure_name(?)}")
            callableStmt.setArray(1, dataElementsArray)
            callableStmt.execute()
        }
        catch (SQLException ex) {
            println(ex)
        }
    }
}
I get the three errors below on startup, even though these classes should be provided by the com.oracle:ojdbc6:11.2.0.3 dependency in my Gradle build, so I'm not sure why they can't be resolved.
`unable to resolve class oracle.sql.ARRAY`
`unable to resolve class oracle.sql.ArrayDescriptor`
`unable to resolve class oracle.jdbc.OracleCallableStatement`
Any suggestions why these classes can't be found?
`oracle.jdbc.OracleCallableStatement` is no longer the correct class to use with the ojdbc dependency version you are using. It should be updated to the standard JDBC import and type:
import java.sql.CallableStatement
CallableStatement callableStmt = null
Here is a link showing what you need to do to replace the other deprecated classes (oracle.sql.ARRAY and oracle.sql.ArrayDescriptor) you are trying to use:
https://docs.oracle.com/database/121/JAJDB/deprecated-list.html#class
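In case it helps, here is a minimal Java-style sketch of what that replacement could look like. It assumes a 12c-era driver (ojdbc7/ojdbc8), where oracle.jdbc.OracleConnection.createOracleArray replaces the deprecated ArrayDescriptor/ARRAY pair; with the 11.2 ojdbc6 driver the oracle.sql classes are still the supported route. The type name "TBLTYPE" and the procedure call are just the ones from the question:

import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import oracle.jdbc.OracleConnection;

public class ProcedureCaller {

    private final DataSource dataSource;

    public ProcedureCaller(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // "TBLTYPE" and the procedure name are taken from the question's code.
    public void callApi(Object[] rectypes) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             CallableStatement callableStmt = conn.prepareCall("{call package.procedure_name(?)}")) {
            // createOracleArray builds the named collection type without ArrayDescriptor/ARRAY
            OracleConnection oracleConn = conn.unwrap(OracleConnection.class);
            Array dataElementsArray = oracleConn.createOracleArray("TBLTYPE", rectypes);
            callableStmt.setArray(1, dataElementsArray);
            callableStmt.execute();
        }
    }
}

The same calls translate directly into the Groovy service shown in the question.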
import com.expediagroup.api.database.OrderDomain
import com.expediagroup.api.database.OrderDomainRepository
import io.r2dbc.pool.ConnectionPool
import io.r2dbc.spi.ConnectionFactory
import org.assertj.core.api.Assertions.assertThat
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestInstance
import org.springframework.beans.factory.annotation.Autowired
import java.io.IOException
import java.sql.Date
import java.util.Arrays
import java.util.function.Consumer
import org.junit.jupiter.api.extension.ExtendWith
import org.springframework.r2dbc.core.DatabaseClient
import org.springframework.test.context.junit.jupiter.SpringExtension
import reactor.test.StepVerifier
import reactor.core.publisher.Hooks
import org.junit.jupiter.api.BeforeEach
import org.springframework.boot.test.autoconfigure.data.r2dbc.DataR2dbcTest
@ExtendWith(SpringExtension::class)
@TestInstance(TestInstance.Lifecycle.PER_METHOD)
@DataR2dbcTest
class R2dbcTemplateIT {

    @Autowired
    var orderDomain: OrderDomainRepository? = null

    @Autowired
    var database: DatabaseClient? = null

    @ClassRule
    var mysql: MySQLContainer<?> = MySQLContainer<>("mysql:5.5")
        .withDatabaseName("test")
        .withUsername("test")
        .withPassword("test")

    @BeforeEach
    fun setUp() {
        Hooks.onOperatorDebug()
        mysql.start()
        val statements: List<String> = Arrays.asList( //
            "DROP TABLE IF EXISTS customer;",
            "CREATE TABLE customer ( id SERIAL PRIMARY KEY, firstname VARCHAR(100) NOT NULL, lastname VARCHAR(100) NOT NULL);"
        )
        statements.forEach(Consumer { it: String? ->
            database!!.sql(it!!) //
                .fetch() //
                .rowsUpdated() //
                .`as`(StepVerifier::create)
                .expectNextCount(1) //
                .verifyComplete()
        })
    }

    @Test
    @Throws(IOException::class)
    fun generatesIdOnInsert() {
        val domainMetadata = customer(1L, "John", "Smith")
        orderDomain?.save(domainMetadata) //
            ?.`as`(StepVerifier::create) //
            ?.assertNext { actual ->
                assertThat(domainMetadata.id).isNull() // immutable before save
                assertThat(actual.id).isNotNull() // after save
            }?.verifyComplete()
    }
}
I am trying to run an integration test on R2DBC using R2dbcRepositories to test out a few things. I have this working except that there is no local database running.
Does anyone have a recommendation on setting up the DB within this test as well?
It turns out this is an issue with Kotlin not accepting the way the container is set up in Java:
https://github.com/testcontainers/testcontainers-java/issues/318
After finishing some integration tests I found that my expected H2 database files did not exist.
With a URL of "jdbc:h2:/tmp/casper" I expected to have a /tmp/casper.mv.db file, but there was none.
The reason is that while initializing the database I used "drop all objects delete files". After all my work, the file disappeared at the end of the test when the datasource was closed.
A demonstration is in my answer to this question.
package org.javautil.h2;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import java.io.File;
import java.sql.Connection;
import java.sql.Statement;
import org.junit.Test;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
public class H2DropAllObjectsTest {

    @Test
    public void casper() throws Exception {
        final HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:/tmp/casper");
        config.setUsername("sr");
        config.setPassword("tutorial");
        config.setAutoCommit(true);
        HikariDataSource dataSource = new HikariDataSource(config);
        Connection connection = dataSource.getConnection();

        File f = new File("/tmp/casper.mv.db");
        assertTrue(f.exists());

        Statement s = connection.createStatement();
        s.execute("drop all objects delete files");
        assertTrue(f.exists());

        s.execute("create table a (b number(9))");
        /* do a lot of work */
        connection.commit();
        s.close();
        connection.close();
        assertTrue(f.exists());

        dataSource.close();
        assertFalse(f.exists());
    }
}
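If you still need to clear the schema between runs, a minimal variation of the statement above keeps the file on disk. This sketch reuses the HikariCP dataSource from the test; the only change is dropping the DELETE FILES clause:

try (Connection connection = dataSource.getConnection();
     Statement s = connection.createStatement()) {
    // "DROP ALL OBJECTS" alone clears the schema; adding "DELETE FILES"
    // marks the database files for removal once the last connection closes.
    s.execute("drop all objects");
}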
Just handle your database with dbCreate: update; don't use create-drop:
dataSource:
    dbCreate: update
When I read from HBase using a RichFlatMapFunction inside a map I am getting a serialization error. What I am trying to do: if an element of the datastream equals a particular string, read from HBase, otherwise ignore it. Below are the sample program and the error I am getting.
package com.abb.Flinktest
import java.text.SimpleDateFormat
import java.util.Properties
import scala.collection.concurrent.TrieMap
import org.apache.flink.addons.hbase.TableInputFormat
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.api.common.io.OutputFormat
import org.apache.flink.api.java.tuple.Tuple2
import org.apache.flink.streaming.api.scala.DataStream
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.api.scala.createTypeInformation
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import org.apache.flink.util.Collector
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.client.Scan
import org.apache.hadoop.hbase.filter.BinaryComparator
import org.apache.hadoop.hbase.filter.CompareFilter
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter
import org.apache.hadoop.hbase.util.Bytes
import org.apache.log4j.Level
import org.apache.flink.api.common.functions.RichMapFunction
object Flinktesthbaseread {

  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.createLocalEnvironment()
    val kafkaStream = env.fromElements("hello")
    val c = kafkaStream.map(x => if (x.equals("hello")) kafkaStream.flatMap(new ReadHbase()))
    env.execute()
  }

  class ReadHbase extends RichFlatMapFunction[String, Tuple11[String, String, String, String, String, String, String, String, String, String, String]] with Serializable {

    var conf: org.apache.hadoop.conf.Configuration = null
    var table: org.apache.hadoop.hbase.client.HTable = null
    var hbaseconnection: org.apache.hadoop.hbase.client.Connection = null
    var taskNumber: String = null
    var rowNumber = 0
    val serialVersionUID = 1L

    override def open(parameters: org.apache.flink.configuration.Configuration) {
      println("getting table")
      conf = HBaseConfiguration.create()
      val in = getClass().getResourceAsStream("/hbase-site.xml")
      conf.addResource(in)
      hbaseconnection = ConnectionFactory.createConnection(conf)
      table = new HTable(conf, "testtable")
      // this.taskNumber = String.valueOf(taskNumber);
    }

    override def flatMap(msg: String, out: Collector[Tuple11[String, String, String, String, String, String, String, String, String, String, String]]) {
      // flatmap operation here
    }

    override def close() {
      table.flushCommits()
      table.close()
    }
  }
}
Error:
log4j:WARN No appenders could be found for logger (org.apache.flink.api.scala.ClosureCleaner$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: Task not serializable
at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:172)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:164)
at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.scalaClean(StreamExecutionEnvironment.scala:617)
at org.apache.flink.streaming.api.scala.DataStream.clean(DataStream.scala:959)
at org.apache.flink.streaming.api.scala.DataStream.map(DataStream.scala:484)
at com.abb.Flinktest.Flinktesthbaseread$.main(Flinktesthbaseread.scala:45)
at com.abb.Flinktest.Flinktesthbaseread.main(Flinktesthbaseread.scala)
Caused by: java.io.NotSerializableException: org.apache.flink.streaming.api.scala.DataStream
- field (class "com.abb.Flinktest.Flinktesthbaseread$$anonfun$1", name: "kafkaStream$1", type: "class org.apache.flink.streaming.api.scala.DataStream")
- root object (class "com.abb.Flinktest.Flinktesthbaseread$$anonfun$1", <function1>)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1182)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:301)
at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:170)
... 6 more
I tried wrapping the field inside a method and a class, and making the class serializable as well, but no luck. Could someone shed some light on this or suggest a workaround?
The problem is that you're trying to access the kafkaStream variable inside the map function, and a DataStream is simply not serializable. It is just an abstract representation of the data flow; it doesn't contain the data itself, which invalidates your function in the first place.
Instead, do something like this:
kafkaStream.filter(x => x.equals("hello")).flatMap(new ReadHbase())
The filter function will only retain the elements for which the condition is true, and those will be passed to your flatMap function.
I would highly recommend that you read the basic API concepts documentation, as there appears to be some misunderstanding as to what actually happens when you specify a transformation.
I am working on a program to process data from Apache Kafka into Elasticsearch, using Apache Spark. I have gone through many links but have been unable to find an example that writes data from a JavaDStream in Apache Spark to Elasticsearch.
Below is sample Spark code which gets data from Kafka and prints it.
import org.apache.log4j.Logger;
import org.apache.log4j.Level;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
import java.util.regex.Pattern;
import scala.Tuple2;
import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.*;
import org.apache.spark.streaming.api.java.*;
import org.apache.spark.streaming.kafka.KafkaUtils;
import org.apache.spark.streaming.Durations;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;
import com.google.common.collect.ImmutableMap;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import java.util.List;
public class SparkStream {

    public static JavaSparkContext sc;
    public static List<Map<String, ?>> alldocs;

    public static void main(String args[]) {
        if (args.length != 2) {
            System.out.println("SparkStream <broker1-host:port,broker2-host:port><topic1,topic2,...>");
            System.exit(1);
        }
        Logger.getLogger("org").setLevel(Level.OFF);
        Logger.getLogger("akka").setLevel(Level.OFF);

        SparkConf sparkConf = new SparkConf().setAppName("Data Streaming");
        sparkConf.setMaster("local[2]");
        sparkConf.set("es.index.auto.create", "true");
        sparkConf.set("es.nodes", "localhost");
        sparkConf.set("es.port", "9200");

        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(2));

        Set<String> topicsSet = new HashSet<>(Arrays.asList(args[1].split(",")));
        Map<String, String> kafkaParams = new HashMap<>();
        String brokers = args[0];
        kafkaParams.put("metadata.broker.list", brokers);
        kafkaParams.put("auto.offset.reset", "largest");
        kafkaParams.put("offsets.storage", "zookeeper");

        JavaPairDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc,
                String.class,
                String.class,
                StringDecoder.class,
                StringDecoder.class,
                kafkaParams,
                topicsSet
        );

        JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
            @Override
            public String call(Tuple2<String, String> tuple2) {
                return tuple2._2();
            }
        });

        lines.print();
        jssc.start();
        jssc.awaitTermination();
    }
}
One method to save to Elasticsearch is to use the saveToEs method inside a foreachRDD call. Any other method you wish to use would still require the foreachRDD call on your DStream.
For example:
lines.foreachRDD(lambda rdd: rdd.saveToEs("ESresource"))
See here for more
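Since the question's pipeline uses the Java API, here is a minimal Java sketch of the same idea. It assumes Spark 1.6+ for the VoidFunction overload of foreachRDD (the question's wildcard import of org.apache.spark.api.java.function.* covers it), and "sparkstream/docs" is just a placeholder index/type:

lines.foreachRDD(new VoidFunction<JavaRDD<String>>() {
    @Override
    public void call(JavaRDD<String> rdd) {
        if (!rdd.isEmpty()) {
            // Assumes each line is already a JSON document; otherwise build Maps or beans
            // and use JavaEsSpark.saveToEs instead.
            JavaEsSpark.saveJsonToEs(rdd, "sparkstream/docs");
        }
    }
});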
dstream.foreachRDD { rdd =>
    val es = sqlContext.createDataFrame(rdd).toDF("use headings suitable for your dataset")
    import org.elasticsearch.spark.sql._
    es.saveToEs("wordcount/testing")
    es.show()
}
In this code block, "dstream" is the data stream which observes data from a server such as Kafka. Inside the brackets of "toDF()" you have to supply column headings. In "saveToEs()" you have to use the Elasticsearch index. Before this you have to create the SQLContext:
val sqlContext = SQLContext.getOrCreate(SparkContext.getOrCreate())
If you are using Kafka to send data you have to add the dependency mentioned below:
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.10.2.1"
Get the dependency
To see the full example, see
In this example you first have to create the Kafka producer "test" and then start Elasticsearch.
After that, run the program. You can see the full sbt build and code using the above URL.
I'm trying to deploy my application on jboss-portal-2.7.2, and it is giving me the following error:
04:17:23,879 INFO [DispatcherPortlet] FrameworkPortlet 'my_portlet': initialization started
04:17:23,882 ERROR [LifeCycle] Cannot start object
org.jboss.portal.portlet.container.PortletInitializationException: The portlet my_portlet threw an error during init
at org.jboss.portal.portlet.impl.jsr168.PortletContainerImpl.start(PortletContainerImpl.java:292)
at org.jboss.portal.portlet.impl.container.PortletContainerLifeCycle.invokeStart(PortletContainerLifeCycle.java:76)
at org.jboss.portal.portlet.impl.container.LifeCycle.managedStart(LifeCycle.java:92)
at org.jboss.portal.portlet.impl.container.PortletApplicationLifeCycle.startDependents(PortletApplicationLifeCycle.java:351)
...
at org.jboss.deployment.scanner.URLDeploymentScanner.deploy(URLDeploymentScanner.java:421)
at org.jboss.deployment.scanner.URLDeploymentScanner.scan(URLDeploymentScanner.java:610)
at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.doScan(AbstractDeploymentScanner.java:263)
at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.loop(AbstractDeploymentScanner.java:274)
at org.jboss.deployment.scanner.AbstractDeploymentScanner$ScannerThread.run(AbstractDeploymentScanner.java:225)
Caused by: java.lang.NoSuchMethodError: org.springframework.web.portlet.context.ConfigurablePortletApplicationContext.setId(Ljava/lang/String;)V
at org.springframework.web.portlet.FrameworkPortlet.createPortletApplicationContext(FrameworkPortlet.java:345)
at org.springframework.web.portlet.FrameworkPortlet.initPortletApplicationContext(FrameworkPortlet.java:294)
at org.springframework.web.portlet.FrameworkPortlet.initPortletBean(FrameworkPortlet.java:268)
at org.springframework.web.portlet.GenericPortletBean.init(GenericPortletBean.java:116)
at javax.portlet.GenericPortlet.init(GenericPortlet.java:107)
at org.jboss.portal.portlet.impl.jsr168.PortletContainerImpl.initPortlet(PortletContainerImpl.java:417)
at org.jboss.portal.portlet.impl.jsr168.PortletContainerImpl.start(PortletContainerImpl.java:256)
... 76 more
Does anyone know how to fix this?
MainController:
package myportlet.spring.controller;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.portlet.bind.annotation.RenderMapping;
import org.apache.commons.lang.StringUtils;
import javax.portlet.PortletPreferences;
import javax.portlet.PortletRequest;
import javax.portlet.PortletSession;
import java.util.LinkedList;
import java.util.List;
@RequestMapping(value = "VIEW")
@Controller(value = "mainController")
public class MainController {

    @RenderMapping
    public String init(@RequestParam(value = "key", required = false) String key, Model model, PortletRequest request) throws Exception {
        PortletSession session = request.getPortletSession();

        /* Get Key from Portlet Preferences */
        PortletPreferences preferences = request.getPreferences();
        String preferencesKey = preferences.getValue(constants.KEY, constants.FACTORY);
        if (StringUtils.isEmpty(key)) {
            key = preferencesKey;
        }

        /* Save current KEY into session */
        session.setAttribute(constants.KEY, key, PortletSession.APPLICATION_SCOPE);

        model.addAttribute("entityList", getEntities());
        model.addAttribute("preferencesKey", preferencesKey);
        return "index";
    }
}
pom.xml
Caused by: java.lang.NoSuchMethodError: org.springframework.web.portlet.context.ConfigurablePortletApplicationContext.setId(Ljava/lang/String;)V
It sounds like you are using an old version of spring-context.jar (Spring Context API, version 2.0.x).
Try using a later version (Spring Context API, version 3.0.x).
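Since the pom.xml contents aren't shown, this is only a sketch of the general shape: keep spring-context on the same version as the spring-webmvc-portlet artifact that FrameworkPortlet comes from, so they agree on the portlet context API. The 3.0.7.RELEASE version below is just an example:

<properties>
    <!-- Example version only; use whatever 3.0.x release your portlet module targets -->
    <spring.version>3.0.7.RELEASE</spring.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc-portlet</artifactId>
        <version>${spring.version}</version>
    </dependency>
</dependencies>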