ORA-01460: unimplemented or unreasonable - oracle

I am trying to run this query against an Oracle database, but unfortunately I receive this error. Please help me:
java.sql.SQLException: ORA-01460: unimplemented or unreasonable conversion requested
For text files there is no issue, but when I try to upload a JPG file I receive this error.
Update: that problem is now solved, but I have another exception. I changed this line
pstmt.setBinaryStream(7, fis, (int) file.length());
to
pstmt.setBinaryStream(7, fis, (long) file.length());
and now I get:
Exception in thread "AWT-EventQueue-0" java.lang.AbstractMethodError: oracle.jdbc.driver.OraclePreparedStatement.setBinaryStream(ILjava/io/InputStream;J)V
Here is the code:
PreparedStatement pstmt =
    conn.prepareStatement("INSERT INTO PM_OBJECT_TABLE( " +
        "N_ACTIVITY_ID, V_NAME, N_SIZE, D_MODIFY, N_CATEGORY, N_NODE_ID, O_OBJECT) " +
        " VALUES ( ?, ?, ?, ?, ?, ?, ? )");
pstmt.setLong(1, N_ACTIVITY_ID);
pstmt.setString(2, file.getName());
pstmt.setLong(3, file.length());
java.util.Date date = new java.util.Date();
java.sql.Date sqlDate = new java.sql.Date(date.getTime());
pstmt.setDate(4, sqlDate);
pstmt.setInt(5, N_CATEGORY);
pstmt.setLong(6, N_NODE_ID);
pstmt.setBinaryStream(7, fis, (int) file.length());
pstmt.executeUpdate();

java.lang.AbstractMethodError: com.mysql.jdbc.ServerPreparedStatement.setBinaryStream(ILjava/io/InputStream;J)V
"To fix this problem you need to change the call to setBinaryStream so the last parameter is passed as an integer instead of a long."
I found that quote in a blog post while facing the same problem. As it says, PreparedStatement.setBinaryStream() has three overloads, and with older drivers you should use setBinaryStream(columnIndex, InputStream, int). The no-length and long-length overloads were only added in JDBC 4.0, so calling them against a driver that does not implement them causes exactly this AbstractMethodError.
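For illustration, a minimal sketch of the safe call for older drivers; fis and file are the same variables as in the question:
// JDBC 4.0 overloads -- only usable if the driver actually implements them:
//   pstmt.setBinaryStream(7, fis);                   // no length argument
//   pstmt.setBinaryStream(7, fis, file.length());    // long length; AbstractMethodError on old drivers
// Pre-JDBC-4.0 overload -- works with older ojdbc/mysql drivers:
pstmt.setBinaryStream(7, fis, (int) file.length());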

I also experienced this issue with code that had been working and then suddenly started throwing this error.
I am running NetBeans 8.0.2 with GlassFish 3.
In the GlassFish\Glassfish\libs folder I had two ojdbc files, ojdbc6.jar and ojdbc14.jar.
It seems that even though ojdbc6 was included in the project libraries, ojdbc14 was also being loaded.
I stopped GlassFish, renamed ojdbc14.jar to ojdbc14.jar.bak, then did a clean and build and redeployed the project.
Problem fixed.

I solved my problem using one of the previous suggestions:
public String insertBineryToDB(long N_ACTIVITY_ID, int N_CATEGORY, long N_NODE_ID, FileInputStream fis, java.io.File file) {
    try {
        // Columns: N_ACTIVITY_ID, V_NAME, N_SIZE, D_MODIFY, N_CATEGORY, N_NODE_ID, O_OBJECT
        PreparedStatement pstmt =
            conn.prepareStatement("INSERT INTO PM_OBJECT_TABLE( " +
                "N_ACTIVITY_ID, V_NAME, N_SIZE, D_MODIFY, N_CATEGORY, N_NODE_ID, O_OBJECT) " +
                " VALUES ( ?, ?, ?, ?, ?, ?, empty_blob())");
        pstmt.setLong(1, N_ACTIVITY_ID);
        pstmt.setString(2, file.getName());
        pstmt.setLong(3, file.length());
        java.util.Date date = new java.util.Date();
        java.sql.Date sqlDate = new java.sql.Date(date.getTime());
        pstmt.setDate(4, sqlDate);
        pstmt.setInt(5, N_CATEGORY);
        pstmt.setLong(6, N_NODE_ID);
        // The direct stream bind is no longer used; the BLOB is written below instead.
        // pstmt.setBinaryStream(7, fis, (int) file.length());
        pstmt.executeUpdate();
        conn.commit();

        // Re-select the empty BLOB locator for update and stream the file contents into it.
        PreparedStatement stmt2 = conn.prepareStatement(
            "select O_OBJECT from PM_OBJECT_TABLE where N_ACTIVITY_ID = ? for update");
        stmt2.setLong(1, N_ACTIVITY_ID);
        ResultSet rset = stmt2.executeQuery();
        rset.next();
        BLOB image = ((OracleResultSet) rset).getBLOB("O_OBJECT");

        FileInputStream inputFileInputStream = new FileInputStream(file);
        int bufferSize = image.getBufferSize();
        byte[] byteBuffer = new byte[bufferSize];
        int bytesRead;
        int totBytesRead = 0;
        int totBytesWritten = 0;
        OutputStream blobOutputStream = image.getBinaryOutputStream();
        while ((bytesRead = inputFileInputStream.read(byteBuffer)) != -1) {
            // After reading a buffer from the binary file, write the contents
            // of the buffer to the BLOB output stream.
            blobOutputStream.write(byteBuffer, 0, bytesRead);
            totBytesRead += bytesRead;
            totBytesWritten += bytesRead;
        }
        inputFileInputStream.close();
        blobOutputStream.close();
        conn.commit();
        rset.close();
        stmt2.close();

        return "Wrote file " + file.getName() + " to BLOB column. " +
            totBytesRead + " bytes read, " +
            totBytesWritten + " bytes written.\n";
    } catch (Exception e) {
        e.printStackTrace();
        return "Writing file " + file.getName() + " to BLOB column failed.";
    }
}

Use java.sql.PreparedStatement.setBinaryStream(int parameterIndex, InputStream x) -- 2 parameters, not 3.

jPOS: How to pack DE 55

I receive data for DE 55 in hex, as below:
<field id="55" value="3546324130323038343038323032353830303935303530303030303030303030394130333032313031313943303130303946303230363030303030303030323130313946313030383031303130334130303030304441433139463141303230383430394632363038393044324530373242333534463233413946323730313830394633363032303030353946333730343132333435363738" type="binary"/>
The other end of the system expects the value in binary format, as below, where the leading 303736 is the ASCII length prefix ("076"):
3037365F2A02084082025800950500000000009A030210119C01009F02060000000021019F1008010103A00000DAC19F1A0208409F260890D2E072B354F23A9F2701809F360200059F370412345678
<field id="55" value="5F2A02084082025800950500000000009A030210119C01009F02060000000021019F1008010103A00000DAC19F1A0208409F260890D2E072B354F23A9F2701809F360200059F370412345678" type="binary"/>
The packager settings I am using are below. Is there any class (field packager) available in jPOS that will give my desired output, or do I have to create a new custom field packager?
When I use the configuration below, the value is sent exactly as it was received in the request message:
<isofield
id="55"
length="999"
name="INTEGRATED CIRCUIT CARD (ICC) SYSTEM-RELATED DATA"
class="org.jpos.iso.IFA_LLLBINARY"/>
Please advise which field packager I can use for DE 55 to get the desired output.
In case I have to create a new custom field packager, what should I do?
Thanks in advance.
Update: I created a custom FieldPackager, as below, which passes the decoded byte array on to the packager:
public byte[] pack(ISOComponent c) throws ISOException {
    try {
        byte[] data1 = c.getBytes();
        String de55_received = ISOUtil.hexString(data1);
        // The received field value is hex-encoded twice, so decode it one extra time.
        byte[] de55_orig_value = DatatypeConverter.parseHexBinary(de55_received);
        String de55_value = new String(de55_orig_value);
        int de55length = de55_value.length();
        byte[] de55_data = new byte[de55length / 2];
        for (int i = 0; i < de55length; i += 2) {
            de55_data[i / 2] = (byte) ((Character.digit(de55_value.charAt(i), 16) << 4)
                + Character.digit(de55_value.charAt(i + 1), 16));
        }
        System.out.println(de55_data);
        System.out.println(DatatypeConverter.printHexBinary(de55_data));
        byte[] data = de55_data;
        int packedLength = prefixer.getPackedLength();
        if (packedLength == 0 && data.length != getLength()) {
            throw new ISOException("Binary data length not the same as the packager length ("
                + data.length + "/" + getLength() + ")");
        }
        byte[] ret = new byte[interpreter.getPackedLength(data.length) + packedLength];
        prefixer.encodeLength(data.length, ret);
        interpreter.interpret(data, ret, packedLength);
        return ret;
    } catch (Exception e) {
        throw new ISOException(makeExceptionMessage(c, "packing"), e);
    }
}
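A minimal sketch of how such a custom packager might be referenced from the packager XML, assuming the class above is compiled as com.mycompany.iso.IFA_LLLBINARY_DOUBLEHEX (the class name is hypothetical):
<isofield
    id="55"
    length="999"
    name="INTEGRATED CIRCUIT CARD (ICC) SYSTEM-RELATED DATA"
    class="com.mycompany.iso.IFA_LLLBINARY_DOUBLEHEX"/>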

How to write data from MySQL into a file using JDBC and FileWriter?

String selectTableSQL = "select JobID, MetadataJson from raasjobs join metadata using (JobID) where JobCreatedDate > '2014-07-01';";
File file = new File("/users/t_shetd/file.txt");
try {
    dbConnection = getDBConnection();
    statement = dbConnection.createStatement();
    System.out.println(selectTableSQL);
    // execute the select SQL statement
    ResultSet rs = statement.executeQuery(selectTableSQL);
    if (!file.exists()) {
        file.createNewFile();
    }
    FileWriter fw = new FileWriter(file.getAbsoluteFile());
    BufferedWriter bw = new BufferedWriter(fw);
    while (rs.next()) {
        String JobID = rs.getString("JobID");
        String Metadata = rs.getString("MetadataJson");
        bw.write(selectTableSQL);
        bw.close();
        System.out.println("Done");
// Right now the only output I get is "Done"
If I understand your question, then this
while (rs.next()) {
    String JobID = rs.getString("JobID");
    String Metadata = rs.getString("MetadataJson");
    bw.write(selectTableSQL);
    bw.close();
    System.out.println("Done");
}
should be something like this (following Java capitalization conventions):
while (rs.next()) {
    String jobId = rs.getString("JobID");
    String metaData = rs.getString("MetadataJson");
    bw.write(String.format("Job ID: %s, MetaData: %s", jobId, metaData));
}
bw.close(); // <-- finish writing first!
System.out.println("Done");
In your version, you close the writer after processing the first row of the ResultSet. After that, nothing else can be written (because the writer is closed).
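As a side note, a try-with-resources sketch avoids the close-ordering problem entirely. This assumes the same getDBConnection() helper and output path from the question, and an enclosing method that declares throws SQLException, IOException:
String sql = "select JobID, MetadataJson from raasjobs join metadata using (JobID) "
        + "where JobCreatedDate > '2014-07-01'";
// Each resource is closed automatically, in reverse order, even if an exception is thrown.
try (Connection conn = getDBConnection();
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(sql);
     BufferedWriter bw = new BufferedWriter(new FileWriter("/users/t_shetd/file.txt"))) {
    while (rs.next()) {
        bw.write(String.format("Job ID: %s, MetaData: %s%n",
                rs.getString("JobID"), rs.getString("MetadataJson")));
    }
}
System.out.println("Done");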

Elasticsearch indexing with BulkRequestBuilder slowing down

Hi all elasticsearch masters.
I have millions of documents to index through the Elasticsearch Java API.
My Elasticsearch cluster has three nodes (1 master + 2 data nodes).
My code snippet is below.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "MyClusterName").build();
TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";

while ((readLine = br.readLine()) != null) {
    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id)
            .setSource(json));
    bulkBuilderLength++;
    if (bulkBuilderLength % 1000 == 0) {
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if (bulkRes.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
    }
}
br.close();

if (bulkBuilder.numberOfActions() > 0) {
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if (bulkRes.hasFailures()) {
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
    bulkBuilder = client.prepareBulk();
}
It works fine at first, but performance slows down rapidly after a few thousand documents.
I have already tried setting "refresh_interval" to -1 and "number_of_replicas" to 0.
However, the performance degradation is the same.
When I monitor the cluster status with bigdesk, the GC value reaches 1 every second or so (screenshot not included here).
Can anyone help me?
Thanks in advance.
=================== UPDATED ===========================
Finally, I solved this problem (see the answer below).
The cause was that I forgot to recreate a new BulkRequestBuilder after each bulk execution.
No performance degradation occurs after changing my code snippet as below.
Thank you very much.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "MyClusterName").build();
TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";

while ((readLine = br.readLine()) != null) {
    id = somefunction(readLine);
    String json = new ObjectMapper().writeValueAsString(readLine);
    bulkBuilder.add(client.prepareIndex(index, type, id)
            .setSource(json));
    bulkBuilderLength++;
    if (bulkBuilderLength % 1000 == 0) {
        logger.info("##### " + bulkBuilderLength + " data indexed.");
        BulkResponse bulkRes = bulkBuilder.execute().actionGet();
        if (bulkRes.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
        }
        bulkBuilder = client.prepareBulk(); // This line is my mistake and the solution !!!
    }
}
br.close();

if (bulkBuilder.numberOfActions() > 0) {
    logger.info("##### " + bulkBuilderLength + " data indexed.");
    BulkResponse bulkRes = bulkBuilder.execute().actionGet();
    if (bulkRes.hasFailures()) {
        logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
    }
    bulkBuilder = client.prepareBulk();
}
The problem here is that you don't recreate a new bulk request after each bulk execution.
That means you keep re-sending the same first batch of documents again and again, and the request keeps growing.
By the way, look at the BulkProcessor class. It is definitely better to use.
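For reference, a minimal BulkProcessor sketch against the same 1.x-era transport client; client, logger, index, type, id and json are the variables from the snippet above, and the thresholds are illustrative:
BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) {
        logger.info("##### executing bulk of " + request.numberOfActions() + " actions");
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        if (response.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + response.buildFailureMessage());
        }
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        logger.error("##### Bulk Request failed", failure);
    }
})
.setBulkActions(1000)        // flush every 1000 index requests
.setConcurrentRequests(1)    // allow one bulk in flight while the next one fills
.build();

// In the read loop, just add requests; flushing and re-creation are handled internally.
bulkProcessor.add(client.prepareIndex(index, type, id).setSource(json).request());

// When the whole file has been read:
bulkProcessor.close();       // flushes any remaining actions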

How to add an image dynamically in Jasper reports in Java

Hi guys!
I designed a Jasper report, exported to PDF, which contains an image stored on my local machine. Now I need to add the image dynamically from the project's classpath. Below I am posting my code; please help me figure out how to add the image dynamically.
File tempFile = File.createTempFile(getClass().getName(), ".pdf");
try {
    FileOutputStream fos = new FileOutputStream(tempFile);
    try {
        ServletOutputStream servletOutputStream = response.getOutputStream();
        InputStream reportStream = getServletConfig().getServletContext().getResourceAsStream("jasperpdf.jasper");
        try {
            String datum1 = request.getParameter("fromdate");
            String datum2 = request.getParameter("todate");
            SimpleDateFormat sdfSource = new SimpleDateFormat("dd-MM-yyyy");
            Date date = sdfSource.parse(datum1);
            Date date2 = sdfSource.parse(datum2);
            SimpleDateFormat sdfDestination = new SimpleDateFormat("yyyy-MM-dd");
            datum1 = sdfDestination.format(date);
            System.out.println(datum1);
            datum2 = sdfDestination.format(date2);
            System.out.println(datum2);

            String strQuery = "";
            ResultSet rs = null;
            conexion conexiondb = new conexion();
            conexiondb.Conectar();
            strQuery = "Select calldate,src,dst,duration,disposition,cdrcost from cdrcost where date(calldate) between '" + datum1 + "' and '" + datum2 + "'";
            rs = conexiondb.Consulta(strQuery);
            JRResultSetDataSource resultSetDataSource = new JRResultSetDataSource(rs);
            JasperRunManager.runReportToPdfStream(reportStream, fos, new HashMap(), resultSetDataSource);
            rs.close();
Does it work when you provide the relative path of the image, i.e. images/image.jpg? You should have a folder named images in your project, and inside it the file image.jpg.
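For illustration, a sketch of what the image element in the report's .jrxml might look like with such a relative path (the position and size values are illustrative):
<image>
    <reportElement x="0" y="8" width="97" height="50"/>
    <imageExpression class="java.lang.String"><![CDATA["images/image.jpg"]]></imageExpression>
</image>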
I'm a newbie with JasperReports, but maybe this code is useful for you:
private static JRDesignImage getImage(int x_postion, int y_position, int width, int height,
        ScaleImageEnum scale_type, HorizontalAlignEnum align_type, JRDesignExpression expression) {
    JRDesignImage image = new JRDesignImage(null);
    image.setX(0);
    image.setY(8);
    image.setWidth(97);
    image.setHeight(50);
    image.setScaleImage(ScaleImageEnum.RETAIN_SHAPE);
    image.setHorizontalAlignment(HorizontalAlignEnum.LEFT);
    image.setExpression(expression);
    return image;
}
then add
band = new JRDesignBand();
band.setHeight(73);
expression = new JRDesignExpression();
expression.setValueClass(java.lang.String.class);
expression.setText("$P{imagePath}");
// jasperDesign.addField();
band.addElement(getImage(0,8,97,50,ScaleImageEnum.RETAIN_SHAPE,HorizontalAlignEnum.LEFT,expression));
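To supply $P{imagePath} at fill time, a sketch along these lines should work with the JasperRunManager call already used in the question; the parameter name must match the one declared in the report, and the classpath resource path is illustrative:
Map<String, Object> params = new HashMap<String, Object>();
// Resolve the image from the classpath instead of a fixed local path.
params.put("imagePath", getClass().getResource("/images/logo.png").toString());
JasperRunManager.runReportToPdfStream(reportStream, fos, params, resultSetDataSource);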

glassfish 3.1.2 - ResultSetWrapper40 cannot be cast to oracle.jdbc.OracleResultSet

I recently migrated from GlassFish 3.1.1 to 3.1.2 and I got the following error:
java.lang.ClassCastException: com.sun.gjc.spi.jdbc40.ResultSetWrapper40 cannot be cast to oracle.jdbc.OracleResultSet
at the line
oracle.sql.BLOB bfile = ((OracleResultSet) rs).getBLOB("filename");
in the following routine:
public void fetchPdf(int matricola, String anno, String mese, String tableType, ServletOutputStream os) {
    byte[] buffer = new byte[2048];
    String query = "SELECT filename FROM "
            + tableType + " where matricola = " + matricola
            + " and anno = " + anno
            + ((tableType.equals("gf_blob_ced") || tableType.equals("gf_blob_car")) ? " and mese = " + mese : "");
    InputStream ins = null;
    //--------
    try {
        Connection conn = dataSource.getConnection();
        //Connection conn = DriverManager.getConnection(connection, "glassfish", pwd);
        java.sql.Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(query);
        if (rs.next()) {
            logger.info("select ok " + query);
            oracle.sql.BLOB bfile = ((OracleResultSet) rs).getBLOB("filename");
            ins = bfile.getBinaryStream();
            int length;
            while ((length = ins.read(buffer)) >= 0) {
                os.write(buffer, 0, length);
            }
            ins.close();
        } else {
            logger.info("select Nok " + query);
        }
        rs.close();
        stmt.close();
        //conn.close();
    } catch (IOException ex) {
        logger.warn("blob file non raggiungibile: " + query);
    } catch (SQLException ex) {
        logger.warn("connessione non riuscita");
    }
}
I'm using the GlassFish connection pool:
@Resource(name = "jdbc/ape4")
private DataSource dataSource;
and the jdbc/ape4 resource belongs to an Oracle connection pool with the following parameters:
NetworkProtocol          tcp
LoginTimeout             0
PortNumber               1521
Password                 xxxxxxxx
MaxStatements            0
ServerName               server
DataSourceName           OracleConnectionPoolDataSource
URL                      jdbc:oracle:thin:@server:1521:APE4
User                     glassfish
ExplicitCachingEnabled   false
DatabaseName             APE4
ImplicitCachingEnabled   false
The Oracle driver is ojdbc6.jar and the Oracle DB is 10g.
Could anyone help me understand what is happening? On GlassFish 3.1.1 it was working fine.
There is no need for non-standard JDBC API in this code. You are not using any Oracle-specific functionality, so rs.getBlob("filename").getBinaryStream() will work just as well.
If you insist on keeping the cast, turn off the JDBC object wrapping option for your datasource.
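For illustration, a sketch of the portable version of that block (same query and copy loop as in the question, just without the Oracle cast):
if (rs.next()) {
    logger.info("select ok " + query);
    // Standard JDBC: no cast to OracleResultSet needed.
    java.sql.Blob blob = rs.getBlob("filename");
    ins = blob.getBinaryStream();
    int length;
    while ((length = ins.read(buffer)) >= 0) {
        os.write(buffer, 0, length);
    }
    ins.close();
}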
