InputStream read is blocked while reading BLOB from Oracle column through JDBC

While trying to read from an InputStream obtained from a BLOB as in the code below, the InputStream blocks indefinitely.
I set this data from a web application running on the JBoss app server, and read/write works absolutely fine there; the problem occurs only when running standalone Java code using plain JDBC.
The environment is JDK 6 and Oracle 10g.
ResultSet rs = this.stmt.executeQuery();
log.println("ResultSetType: " + (rs != null ? rs.getClass() : null));
while (rs != null && rs.next()) {
    . . . // read other columns
    Blob savedBlob = rs.getBlob("PERSISTENCE_BLOB");
    long len = savedBlob.length();
    log.println("Going to read bytes..." + len);
    InputStream is = savedBlob.getBinaryStream();
    log.println("IS Received...");
    log.println("Available : " + is.available());
    ObjectInputStream oip = new ObjectInputStream(is);
    Object obj = oip.readObject();
    oip.close();
    is.close();
    savedBlob.free();
    . . .
The output is as below:
ResultSetType: class oracle.jdbc.driver.OracleResultSetImpl
RowID: XXXXXXXXXXXXXXX // Row is selected and printed properly
Going to read bytes...6022
IS Received...
Available : 0
But if I try to read in chunks as below, it works fine, which I don't want, since I am reading a serialized object and want an ObjectInputStream opened over the InputStream.
. . .
ResultSet rs = this.stmt.executeQuery();
log.println("ResultSetType: " + (rs != null ? rs.getClass() : null));
while (rs != null && rs.next()) {
    . . .
    Blob savedBlob = rs.getBlob("PERSISTENCE_BLOB");
    long len = savedBlob.length();
    int start = 1;
    int totalBytesRead = 0;
    int buffSize = 2048;
    byte[] byteBuff = null;
    log.println("Going to read bytes..." + len);
    do {
        byteBuff = savedBlob.getBytes(start, buffSize);
        totalBytesRead += byteBuff.length; // count the bytes actually returned, not the buffer size
        log.println(start + "," + buffSize + " #BLOB bytes: " + new String(byteBuff));
        start += buffSize;
        . . .
    } while (. . .);
    log.println("Total Bytes: " + totalBytesRead);
Output:
ResultSetType: class oracle.jdbc.driver.OracleResultSetImpl
Going to read bytes...6022
1,2048 #BLOB bytes: //......bytes data..........
.....
Total Bytes: 6022

InputStream.available() doesn't indicate how much data the stream holds in total; it indicates how many bytes the stream can return to you right now (e.g. from a buffer) without entering a potentially blocking read operation.
The Javadoc also indicates:
Note that while some implementations of InputStream will return the
total number of bytes in the stream, many will not. It is never
correct to use the return value of this method to allocate a buffer
intended to hold all data in this stream.
and
The available method for class InputStream always returns 0.
So instead of using available() as any sort of indication, just read the stream (which clearly works, as your chunked code shows).
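A minimal sketch of that, assuming (as in your code) that PERSISTENCE_BLOB holds a Java-serialized object; the BufferedInputStream wrapper is an optional addition:
ResultSet rs = this.stmt.executeQuery();
while (rs != null && rs.next()) {
    // Stream the column directly; no need for available() or Blob.length()
    InputStream is = rs.getBinaryStream("PERSISTENCE_BLOB");
    ObjectInputStream oip = new ObjectInputStream(new BufferedInputStream(is));
    try {
        Object obj = oip.readObject(); // reads exactly the bytes it needs, blocking as required
    } catch (ClassNotFoundException e) {
        throw new IOException(e);
    } finally {
        oip.close(); // closes the wrapped column stream too
    }
}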

Related

Java: What is the exact functionality of the buffer.flip() method?

try (FileOutputStream binFile = new FileOutputStream("data.dat");
     FileChannel binChannel = binFile.getChannel()) {
    ByteBuffer buffer = ByteBuffer.allocate(100);
    byte[] outputBytes = "Hello World!".getBytes();
    buffer.put(outputBytes);
    long int1Pos = outputBytes.length; // file offset at which the int will start
    buffer.putInt(245);
    buffer.flip(); // switch from filling the buffer to draining it before the channel write
    binChannel.write(buffer);
    java.io.RandomAccessFile ra = new java.io.RandomAccessFile("data.dat", "rwd");
    FileChannel channel = ra.getChannel();
    ByteBuffer readBuffer = ByteBuffer.allocate(100);
    channel.position(int1Pos); // seek past the string to the int
    channel.read(readBuffer);
    readBuffer.flip(); // prepare the buffer for the relative getInt()
    System.out.println("Int3 = " + readBuffer.getInt());
    channel.close();
    ra.close();
} catch (IOException e) {
    e.printStackTrace();
}
You should check out the Java docs on it: https://docs.oracle.com/javase/7/docs/api/java/nio/Buffer.html
Flips this buffer. The limit is set to the current position and then the position is set to zero. If the mark is defined then it is discarded.
After a sequence of channel-read or put operations, invoke this method to prepare for a sequence of channel-write or relative get operations. For example:
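The example the Javadoc gives there is:
buf.put(magic);    // Prepend header
in.read(buf);      // Read data into rest of buffer
buf.flip();        // Flip buffer
out.write(buf);    // Write header + data to channel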

Azure Page Blob OpenRead does not fetch more than StreamMinimumReadSizeInBytes

I have a page blob containing effectively log data. Everything works fine until the log fills up past 2 MB.
When Reading, I'm using the OpenReadAsync method to get a stream from which I read data out of. Prior to calling OpenReadAsync, I set StreamMinimumReadSizeInBytes to 2MB (2 * 1024 * 1024).
After opening the stream, I use the following method to read data out.
public IEnumerable<object> Read(Stream pageAlignedEventStream, long? maxBytes = null)
{
    while (pageAlignedEventStream.Position < (maxBytes ?? pageAlignedEventStream.Length))
    {
        byte[] bytesToReadBuffer = new byte[LongZero.Length];
        pageAlignedEventStream.Read(bytesToReadBuffer, 0, LongZero.Length);
        long bytesToRead = BitConverter.ToInt64(bytesToReadBuffer, 0);
        if (bytesToRead == 0)
        {
            yield break;
        }
        if (bytesToRead < 0)
        {
            throw new InvalidOperationException("Invalid size specification. Stream may be corrupted.");
        }
        if (bytesToRead > Int32.MaxValue)
        {
            throw new InvalidOperationException("Payload size is too large.");
        }
        byte[] payload = new byte[bytesToRead];
        int read = pageAlignedEventStream.Read(payload, 0, (int) bytesToRead);
        if (read != bytesToRead)
        {
            // when it fails: read == 503, bytesToRead == 3575, position == 2MB (2*1024*1024)
            throw new InvalidOperationException("Did not read expected number of bytes.");
        }
        yield return this.EventSerializer.DeserializeFromStream(new MemoryStream(payload, false));
        var paddedSpaceToSkip = PagesRequired(bytesToRead) * PageSizeBytes - bytesToRead - LongZero.Length;
        pageAlignedEventStream.Position += paddedSpaceToSkip;
    }
    yield break;
}
As noted in the comment in the code, the failure happens when the position reaches the 2 MB specified. The read fails to pull additional bytes before returning and only reads 503 bytes instead of the expected 3575 bytes.
My expectation was that as I read past the buffer size, it would download more data.
I found a similar issue on Azure Feedback, but that issue involves a non-power-of-2 buffer size, whereas 2 MB is definitely a power of 2.
I could fetch all the data (3 MB in size) stored in a page blob even though I set the StreamMinimumReadSizeInBytes property of CloudPageBlob to 2 MB.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainername");
container.CreateIfNotExists();
CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob");
pageBlob.StreamMinimumReadSizeInBytes = 2 * 1024 * 1024;
Task<Stream> pageAlignedEventStream = pageBlob.OpenReadAsync();
The read fails to pull additional bytes before returning and only reads 503 bytes instead of the expected 3575 bytes.
If that many bytes are not currently available, or the end of the stream has been reached, a single Read call can return fewer bytes than requested. Debug your code to trace how the paddedSpaceToSkip variable changes and check whether your code logic is as expected.

HBase scan with offset

Is there a way to scan an HBase table getting, for example, the first 100 results, then later the next 100, and so on, just like we do in SQL with LIMIT and OFFSET?
My row keys are UUIDs.
You can do it in multiple ways. The easiest one is a page filter. Below is the code example from HBase: The Definitive Guide, page 150.
private static final byte[] POSTFIX = new byte[] { 0x00 };

Filter filter = new PageFilter(15);
int totalRows = 0;
byte[] lastRow = null;
while (true) {
    Scan scan = new Scan();
    scan.setFilter(filter);
    if (lastRow != null) {
        byte[] startRow = Bytes.add(lastRow, POSTFIX);
        System.out.println("start row: " + Bytes.toStringBinary(startRow));
        scan.setStartRow(startRow);
    }
    ResultScanner scanner = table.getScanner(scan);
    int localRows = 0;
    Result result;
    while ((result = scanner.next()) != null) {
        System.out.println(localRows++ + ": " + result);
        totalRows++;
        lastRow = result.getRow();
    }
    scanner.close();
    if (localRows == 0) break;
}
System.out.println("total rows: " + totalRows);
Or you can set caching on the scan to the page size you want, and then for each scan move the start row to the last row plus one from the previous scan.
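A rough sketch of that second approach, using the same classic API as the book example above (PAGE_SIZE, the 0x00 postfix, and table are assumptions carried over from it):
// Manual paging: fetch one page per scan, then resume just after the last row seen.
byte[] lastRow = null;
while (true) {
    Scan scan = new Scan();
    scan.setCaching(PAGE_SIZE); // hint: pull roughly a page of rows per RPC
    if (lastRow != null) {
        scan.setStartRow(Bytes.add(lastRow, new byte[] { 0x00 })); // last row + 1
    }
    ResultScanner scanner = table.getScanner(scan);
    int fetched = 0;
    Result result;
    while ((result = scanner.next()) != null && fetched < PAGE_SIZE) {
        lastRow = result.getRow();
        fetched++;
        // ... hand the row to the caller ...
    }
    scanner.close();
    if (fetched < PAGE_SIZE) break; // ran out of rows
}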

Image Transmit to Intermec PM4i printer and then Print

I'm using Fingerprint to upload and then print an image in PCX format.
Step 1: Upload the image to the printer over a TCP port, using the command:
IMAGE LOAD "bigfoot.1",1746,""\r\n
The printer returns the message "OK".
Then I send the byte data of bigfoot.1 to the printer over the socket.
Step 2 Print the image "bigfoot.1":
PRPOS 200,200
DIR 3
ALIGN 5
PRIMAGE "bigfoot.1"
PRINTFEED
RUN
Then the problem comes: the printer returns the message "Image not found". So I suspected the upload had failed, and opened PrintSet4 to check, but the image already exists in TMP. Odd!
Finally, I used PrintSet4 instead of my socket application to upload the image. After Add File and Apply, the Step 2 print commands work fine!
Here is the C# code to upload Image:
public void SendFile(string filePath, string CR_LF)
{
    FileInfo fi = new FileInfo(filePath);
    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        byte[] byteFile = new byte[fs.Length];
        string cmd = "IMAGE LOAD \"" + fi.Name + "\"," + byteFile.Length.ToString() + ",\" \"" + CR_LF;
        ClientSocket.Send(encode.GetBytes(cmd));
        fs.Read(byteFile, 0, byteFile.Length);
        Thread.Sleep(1000);
        ClientSocket.Send(byteFile);
    }
}
I have modified your code and used the serial port.
public void SendFile(string filePath)
{
    SerialPort port = new SerialPort("COM3", 38400, Parity.None, 8, StopBits.One);
    port.Open();
    FileInfo fi = new FileInfo(filePath);
    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        byte[] byteFile = new byte[fs.Length];
        // string cmd = "IMAGE LOAD \"" + fi.Name + "\"," + byteFile.Length.ToString() + ",\"\"" + CR_LF;
        string cmd = "IMAGE LOAD " + "\"" + fi.Name + "\"" + "," + byteFile.Length.ToString() + "," + "\"S\"";
        port.WriteLine(cmd);
        fs.Read(byteFile, 0, byteFile.Length);
        port.Write(byteFile, 0, byteFile.Length);
    }
    port.Close();
}
So I noticed the problem was the CR_LF handling. Instead I used port.WriteLine(cmd), which has the same effect as appending a line separator, and it worked fine.

MariaDB JDBC driver blob update not supported

After I replaced the MySQL JDBC driver 5.1 with the MariaDB JDBC driver 1.1.5 and tested the existing code base against MySQL Server 5.0 and MariaDB Server 5.2, everything works fine except a JDBC call to update a blob field in a table.
The blob field contains an XML configuration file. The code reads it out, converts it to XML, inserts some values, converts the result to a ByteArrayInputStream object, and calls the method
statement.updateBinaryStream(columnLabel, the ByteArrayInputStream object, its length)
but an exception is thrown:
Perhaps you have some incorrect SQL syntax?
java.sql.SQLFeatureNotSupportedException: Updates are not supported
    at org.mariadb.jdbc.internal.SQLExceptionMapper.getFeatureNotSupportedException(SQLExceptionMapper.java:165)
    at org.mariadb.jdbc.MySQLResultSet.updateBinaryStream(MySQLResultSet.java:1642)
    at org.apache.commons.dbcp.DelegatingResultSet.updateBinaryStream(DelegatingResultSet.java:511)
I tried the updateBlob method; the same exception was thrown.
The code works well with the MySQL JDBC driver 5.1.
Any suggestions on how to work around this situation?
See the ticket updating blob with updateBinaryStream, where a comment states that it isn't supported.
A workaround would be to use two SQL statements: one to select the data and another to update it. Something like this:
final Statement select = connection.createStatement();
try {
    final PreparedStatement update = connection.prepareStatement( "UPDATE table SET blobColumn=? WHERE idColumn=?" );
    try {
        final ResultSet selectSet = select.executeQuery( "SELECT idColumn,blobColumn FROM table" );
        try {
            while( selectSet.next() ) { // position the cursor before reading columns
                final int id = selectSet.getInt( "idColumn" );
                final InputStream stream = workWithStreamAndReturnANew( selectSet.getBinaryStream( "blobColumn" ) );
                update.setBinaryStream( 1, stream );
                update.setInt( 2, id );
                update.execute();
            }
        }
        finally {
            selectSet.close();
        }
    }
    finally {
        update.close();
    }
}
finally {
    select.close();
}
But be aware that you need some way to uniquely identify a table entry; in this example the column idColumn was used for that purpose. Furthermore, if you stored an empty stream in the database you might get an SQLException.
A simpler workaround is using binary literals (like X'2a4b54') and concatenation (UPDATE table SET blobcol = blobcol || X'2a4b54'), like this:
int iBUFSIZ = 4096;
byte[] buf = new byte[iBUFSIZ];
int iLength = 0;
int iUpdated = 1;
for (int iRead = stream.read(buf, 0, iBUFSIZ);
     (iUpdated == 1) && (iRead != -1) && (iLength < iTotalLength);
     iRead = stream.read(buf, 0, iBUFSIZ))
{
    String sValue = "X'" + toHex(buf, 0, iRead) + "'";
    if (iLength > 0)
        sValue = sBlobColumn + " || " + sValue;
    String sSql = "UPDATE " + sTable + " SET " + sBlobColumn + "= " + sValue;
    Statement stmt = connection.createStatement();
    iUpdated = stmt.executeUpdate(sSql);
    stmt.close();
    iLength += iRead; // advance, so the loop terminates once iTotalLength bytes are written
}
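The toHex helper isn't shown here; a minimal sketch of it (the name and signature are assumptions taken from the call site above) could be:
// Hypothetical helper matching the call above: hex-encodes buf[iOffset .. iOffset + iLength).
static String toHex(byte[] buf, int iOffset, int iLength)
{
    StringBuilder sb = new StringBuilder(2 * iLength);
    for (int i = iOffset; i < iOffset + iLength; i++)
        sb.append(String.format("%02x", buf[i]));
    return sb.toString();
}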
