I am trying to do some image manipulation using the IConverter class from the Xuggler library, to convert images from IVideoPicture to BufferedImage, but I am encountering the error in the title.
Here is my code:
BufferedImage orgnlimage = new BufferedImage(Picture.getWidth(), Picture.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
IConverter converter = ConverterFactory.createConverter(orgnlimage, IPixelFormat.Type.BGR24);
orgnlimage = converter.toImage(Picture); // Exception on this line
The dimensions of the image in question are 360x360.
This is the exception I'm getting:
Exception in thread "main" java.awt.image.RasterFormatException: Data array too small (should be > 388799 )
at sun.awt.image.ByteComponentRaster.verify(ByteComponentRaster.java:947)
at sun.awt.image.ByteComponentRaster.<init>(ByteComponentRaster.java:201)
at sun.awt.image.ByteInterleavedRaster.<init>(ByteInterleavedRaster.java:191)
at sun.awt.image.ByteInterleavedRaster.<init>(ByteInterleavedRaster.java:113)
at java.awt.image.Raster.createWritableRaster(Raster.java:980)
at com.xuggle.xuggler.video.BgrConverter.toImage(BgrConverter.java:195)
at xuggler.Encrypt.main(Encrypt.java:53)
at xuggler.DecodeAndSaveAudioVideo.main(DecodeAndSaveAudioVideo.java:141)
My second attempt:
public IVideoPicture main(IVideoPicture Picture) throws NoSuchPaddingException, IllegalBlockSizeException, BadPaddingException, IOException
{
int width = Picture.getWidth();
int height = Picture.getHeight();
long timestamp = Picture.getTimeStamp();
BufferedImage orgnlimage = videoPictureToImage(Picture);
byte[] orgnlimagebytes = toByte(orgnlimage);
byte[] encryptedbytes = encrypt(orgnlimagebytes, "abc");
//System.out.println(encryptedbytes.length);
BufferedImage encryptedimage = toImage(encryptedbytes, width, height);
String desc = ConverterFactory.findDescriptor(encryptedimage);
IConverter converter = ConverterFactory.createConverter(desc, Picture);
IVideoPicture Pic = converter.toPicture(encryptedimage, timestamp);
return Pic;
}
and the stack trace :
Exception in thread "main" java.nio.BufferOverflowException
at java.nio.DirectByteBuffer.put(DirectByteBuffer.java:363)
at java.nio.ByteBuffer.put(ByteBuffer.java:859)
at com.xuggle.xuggler.video.BgrConverter.toPicture(BgrConverter.java:132)
at xuggler.Encrypt.main(Encrypt.java:62)
at xuggler.DecodeAndSaveAudioVideo.main(DecodeAndSaveAudioVideo.java:141)
The problem is that, for some reason, Xuggler's BgrConverter.toImage() method tries to create a raster around a byte array of size 388799, which is one byte short: it should be 388800 (360 * 360 * 3) for your image in BGR format.
I'd say file a bug report.
Or try Humble Video instead, which seems to be something of a successor to Xuggler.
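As a sanity check on the arithmetic, here is a plain-java.awt sketch (no Xuggler involved) showing that a correctly allocated 360x360 TYPE_3BYTE_BGR image really is backed by exactly width * height * 3 bytes, one more than the array Xuggler passes in:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class RasterSize {
    public static void main(String[] args) {
        int width = 360, height = 360;
        // A TYPE_3BYTE_BGR image stores 3 bytes (B, G, R) per pixel.
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
        byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        System.out.println(data.length); // 388800 == 360 * 360 * 3
        // Xuggler handed Raster.createWritableRaster a 388799-byte array,
        // hence "Data array too small (should be > 388799)".
    }
}
```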
I want to create a GIF with a Logical Screen Descriptor larger than any image in my gif image sequence. Each image in the gif will have its top and left offsets modified. Here's the code I have; it looks like it ought to work, but it doesn't:
void test() throws IOException {
Image image1 = textToImage ("m",12.0 );
Image image2 = textToImage("n", 24.0);
Image[] images = {image2, image1};
String[] imageTopOffset = {"6", "30"};
String[] imageLeftOffset = {"6", "36"};
ImageWriter iw = ImageIO.getImageWritersByFormatName("gif").next();
ImageWriteParam params = iw.getDefaultWriteParam();
int type = ((BufferedImage)getRenderedImage(image1)).getType();
ImageTypeSpecifier imageTypeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(type);
IIOMetadata metadata = iw.getDefaultImageMetadata(imageTypeSpecifier, params);
IIOMetadataNode root = (IIOMetadataNode)metadata.getAsTree(metadata.getNativeMetadataFormatName());
IIOMetadataNode lsdNode = getNode(root, "LogicalScreenDescriptor");
lsdNode.setAttribute("logicalScreenHeight", "100");
lsdNode.setAttribute("logicalScreenWidth", "75");
IIOMetadataNode graphicsControlExtensionNode = getNode(root, "GraphicControlExtension");
graphicsControlExtensionNode.setAttribute("disposalMethod", "none");
graphicsControlExtensionNode.setAttribute("userInputFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("transparentColorFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("delayTime", "100");
graphicsControlExtensionNode.setAttribute("transparentColorIndex", "0");
IIOMetadataNode commentsNode = getNode(root, "CommentExtensions");
commentsNode.setAttribute("CommentExtension", "Created by: http://example.com");
IIOMetadataNode appExtensionsNode = getNode(root, "ApplicationExtensions");
IIOMetadataNode child = new IIOMetadataNode("ApplicationExtension");
child.setAttribute("applicationID", "NETSCAPE");
child.setAttribute("authenticationCode", "2.0");
boolean loop = true;
int loopContinuously = loop ? 0 : 1;
child.setUserObject(new byte[]{ 0x1, (byte) (loopContinuously & 0xFF), (byte) ((loopContinuously >> 8) & 0xFF)});
appExtensionsNode.appendChild(child);
ByteArrayOutputStream os = new ByteArrayOutputStream();
ImageOutputStream ios = ImageIO.createImageOutputStream(os);
iw.setOutput(ios);
iw.prepareWriteSequence(metadata);
int i = 0;
for (Image image : images) {
graphicsControlExtensionNode = getNode(root, "GraphicControlExtension");
graphicsControlExtensionNode.setAttribute("delayTime", "50");
IIOMetadataNode imageDescriptorNode = getNode(root, "ImageDescriptor");
imageDescriptorNode.setAttribute("imageLeftPosition", imageLeftOffset[i]);
imageDescriptorNode.setAttribute("imageTopPosition", imageTopOffset[i]);
imageDescriptorNode.setAttribute("imageWidth", String.valueOf(image.getWidth()));
imageDescriptorNode.setAttribute("imageHeight", String.valueOf(image.getHeight()));
imageDescriptorNode.setAttribute("interlaceFlag", "FALSE");
IIOImage ii = new IIOImage(getRenderedImage(image), null, metadata);
iw.writeToSequence(ii, params);
i++;
}
iw.endWriteSequence();
ios.close();
byte[] gifContent = os.toByteArray();
os.close();
File outputFile = new File("test.gif");
try (FileOutputStream outputStream = new FileOutputStream(outputFile)) {
outputStream.write(gifContent);
outputStream.close();
}
}
private WritableImage textToImage(String text, Double size) {
Text t = new Text();
t.setFont(getFont("Calibi",
"NORMAL",
"REGULAR",
size));
t.setStroke(Color.BLACK);
t.setText(text);
Scene scene = new Scene(new StackPane(t));
return t.snapshot(null, null);
}
IIOMetadataNode getNode(IIOMetadataNode rootNode, String name) {
NodeList childNodes = rootNode.getChildNodes();
for (int i=0; i<childNodes.getLength(); i++) {
if (childNodes.item(i).getNodeName().equals(name) ) {
return (IIOMetadataNode)childNodes.item(i);
}
}
// no child node with the given name found, create one!
IIOMetadataNode metadataNode = new IIOMetadataNode(name);
rootNode.appendChild(metadataNode);
return metadataNode;
}
Font getFont(String fontname, String fontWeight, String fontPosture, double size) {
FontPosture posture = FontPosture.valueOf(fontPosture);
FontWeight weight = FontWeight.valueOf(fontWeight);
Font font = Font.font (fontname, weight, posture, size);
return font;
}
public RenderedImage getRenderedImage(Image image) {
return SwingFXUtils.fromFXImage(image, null);
}
The gif it produces is the size of the first image in the sequence, even though I set the LogicalScreenDescriptor to a bigger size than any image that gets written out. The other problem is that imageTopPosition and imageLeftPosition don't get applied.
The two images are of different sizes: one is a 12-point image of the letter m, and the other is a 24-point image of the letter n.
So how do I make a larger logical screen descriptor, and how do I change the image descriptor offsets? Although the above code looks like it should work, it doesn't. Most examples I've found assume that all images in a gif are the same size and that each subsequent image completely replaces the previous one.
Here are the code changes I made that solved the problem of the gif not displaying properly:
void test() throws IOException {
Image image1 = textToImage ("m",12.0 );
Image image2 = textToImage("n", 24.0);
Image[] images = {image2, image1};
String[] imageTopOffset = {"6", "30"};
String[] imageLeftOffset = {"6", "36"};
ImageWriter iw = ImageIO.getImageWritersByMIMEType("image/gif").next();
ImageWriteParam params = iw.getDefaultWriteParam();
int type = ((BufferedImage)getRenderedImage(image1)).getType();
ImageTypeSpecifier imageTypeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(type);
IIOMetadata imageMetadata = iw.getDefaultImageMetadata(imageTypeSpecifier, params);
IIOMetadata streamMetadata = iw.getDefaultStreamMetadata(params);
IIOMetadataNode streamRoot = (IIOMetadataNode)streamMetadata.getAsTree(streamMetadata.getNativeMetadataFormatName());
IIOMetadataNode imageRoot = (IIOMetadataNode)imageMetadata.getAsTree(imageMetadata.getNativeMetadataFormatName());
ByteArrayOutputStream os = new ByteArrayOutputStream();
ImageOutputStream ios = ImageIO.createImageOutputStream(os);
iw.setOutput(ios);
IIOMetadataNode lsdNode = getNode(streamRoot, "LogicalScreenDescriptor");
lsdNode.setAttribute("logicalScreenHeight", "100");
lsdNode.setAttribute("logicalScreenWidth", "75");
/*
* The following extension nodes must not be put in the streamMetadata. If you do add them
* to the streamMetadata, you'll get an error when you prepareWriteSequence.
*
IIOMetadataNode graphicsControlExtensionNode = getNode(streamRoot, "GraphicControlExtension");
graphicsControlExtensionNode.setAttribute("disposalMethod", "none");
graphicsControlExtensionNode.setAttribute("userInputFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("transparentColorFlag", "FALSE");
graphicsControlExtensionNode.setAttribute("delayTime", "100");
graphicsControlExtensionNode.setAttribute("transparentColorIndex", "0");
IIOMetadataNode commentsNode = getNode(streamRoot, "CommentExtensions");
commentsNode.setAttribute("CommentExtension", "Created by: http://example.com");
IIOMetadataNode appExtensionsNode = getNode(streamRoot, "ApplicationExtensions");
IIOMetadataNode child = new IIOMetadataNode("ApplicationExtension");
child.setAttribute("applicationID", "NETSCAPE");
child.setAttribute("authenticationCode", "2.0");
boolean loop = true;
int loopContinuously = loop ? 0 : 1;
child.setUserObject(new byte[]{ 0x1, (byte) (loopContinuously & 0xFF), (byte) ((loopContinuously >> 8) & 0xFF)});
appExtensionsNode.appendChild(child);
*/
streamMetadata.setFromTree(streamMetadata.getNativeMetadataFormatName(), streamRoot);
iw.prepareWriteSequence(streamMetadata);
int i = 0;
for (Image image : images) {
IIOMetadataNode graphicsControlExtensionNode = getNode(imageRoot, "GraphicControlExtension");
graphicsControlExtensionNode.setAttribute("delayTime", "50");
IIOMetadataNode imageDescriptorNode = getNode(imageRoot, "ImageDescriptor");
imageDescriptorNode.setAttribute("imageLeftPosition", imageLeftOffset[i]);
imageDescriptorNode.setAttribute("imageTopPosition", imageTopOffset[i]);
imageMetadata.setFromTree(imageMetadata.getNativeMetadataFormatName(),imageRoot);
IIOImage ii = new IIOImage(getRenderedImage(image), null, imageMetadata);
iw.writeToSequence(ii, params);
i++;
}
iw.endWriteSequence();
ios.close();
byte[] gifContent = os.toByteArray();
os.close();
File outputFile = new File("test.gif");
try (FileOutputStream outputStream = new FileOutputStream(outputFile)) {
outputStream.write(gifContent);
outputStream.close();
}
}
The prepareWriteSequence call did not like streamMetadata that contains the GraphicControlExtension and ApplicationExtensions nodes; I figured that out by examining the source code for GIFImageWriter. There's also some problem with the values I provided for the "imageWidth" and "imageHeight" attributes. I'm not sure what the values for those attributes should look like, so I just avoid the problem by not setting them.
The output is a 75x100 gif with a 12pt letter m offset by 30 from the top and 36 from the left and a 24 point letter n offset 6 from the top and 6 from the left.
Inserting smaller images works fine. However, when the image file gets bigger than about 2MB, I get an OutOfMemoryException.
I tried with SqlCeUpdatableRecord and a parameterized SqlCeCommand. Using the SqlCeCommand, the exception is raised at ExecuteNonQuery().
With SqlCeUpdatableRecord, the exception is raised at SqlCeUpdatableRecord.SetBytes().
I've tried increasing the buffer size and the temp file size without any apparent effect. I've debugged with GetTotalMemory, and there seems to be plenty of memory available.
Any tips would be highly appreciated.
The SQL Server CE database is synchronized with a main SQL Server database, and handling images as separate files would introduce a truckload of other issues. The solution has worked fine for years, as most WM handhelds only capture images at 5MP or less. However, newer models with support for 8MP images are causing issues.
Here is a simplified version of the SqlCeUpdatableRecord implementation:
System.Data.SqlServerCe.SqlCeUpdatableRecord newRecord = base.CreateRecord();
newRecord["ImageId"] = ImageId;
newRecord["ImageType"] = ImageType;
// 64kb buffer (tested with different buffer sizes, up to 8MB)
newRecord.SetBytesFromFile("ImageFull", ImageFullFileName, 65536);
newRecord["ImageThumb"] = ImageThumb;
newRecord["ImageDate"] = ImageDate;
base.Insert(newRecord);
.....
public static void SetBytesFromFile(this SqlCeUpdatableRecord obj, string name, string filename, int buffersize)
{
int _column = obj.GetOrdinal(name);
byte[] buffer = new byte[buffersize];
int bytesread;
long offset = 0;
using (FileStream fs = new FileStream(filename,FileMode.Open))
{
bytesread = fs.Read(buffer, 0, buffersize);
while (bytesread > 0)
{
//This will raise OutOfMemoryException for imagefiles larger than appx. 2mb, regardless of buffersize
obj.SetBytes(_column, offset, buffer, 0, bytesread);
offset = offset + (long)bytesread;
bytesread = fs.Read(buffer, 0, buffersize);
}
}
}
Here is a simplified version of the parameterized query:
using (var cmdInsert = new SqlCeCommand("INSERT INTO [Image_CF] ([ImageId], [ImageType], [ImageFull], [ImageThumb], [ImageDate]) VALUES " +
" (@ImageId, @ImageType, @ImageFull, @ImageThumb, @ImageDate)"))
{
cmdInsert.Connection = Database.IrisPdaConnection;
cmdInsert.Parameters.Add("@ImageId", System.Data.SqlDbType.Int).Value = ImageId;
cmdInsert.Parameters.Add("@ImageType", System.Data.SqlDbType.NVarChar).Value = ImageType;
cmdInsert.Parameters.Add("@ImageFull", System.Data.SqlDbType.Image).Value = GetFileAsBytes(ImageFullFileName);
cmdInsert.Parameters.Add("@ImageThumb", System.Data.SqlDbType.Image).Value = ImageThumb;
cmdInsert.Parameters.Add("@ImageDate", System.Data.SqlDbType.NVarChar).Value = ImageDate;
// OutOfMemoryException for images larger than appx. 2MB
cmdInsert.ExecuteNonQuery();
}
public byte[] GetFileAsBytes(string filename)
{
FileStream fs;
byte[] result;
using (fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
// a byte array to read the image
result = new byte[fs.Length];
fs.Read(result, 0, System.Convert.ToInt32(fs.Length));
}
return result;
}
In my WP7 application I am selecting an image from the media library, and I want to get the Base64 string of that image because I am sending it to my WCF service to create the image on the server. The code for getting the Base64 string is as follows:
void taskToChoosePhoto_Completed(object sender, PhotoResult e)
{
if (e.TaskResult == TaskResult.OK)
{
fileName = e.OriginalFileName;
selectedPhoto = PictureDecoder.DecodeJpeg(e.ChosenPhoto);
imgSelected.Source = selectedPhoto;
int[] p = selectedPhoto.Pixels;
int len = p.Length * 4;
result = new byte[len]; // ARGB
Buffer.BlockCopy(p, 0, result, 0, len);
base64 = System.Convert.ToBase64String(result);
}
}
On the server this code creates an image file, but its format is invalid. I cross-validated the Base64 string, and I think the app is producing the wrong Base64 string. What could be the reason? Please help me find the problem.
You are sending Base64-encoded raw pixels to the server; I'm not sure that is what you need. How about converting the Stream to a Base64 string instead?
var memoryStream = new MemoryStream();
e.ChosenPhoto.CopyTo(memoryStream);
byte[] result = memoryStream.ToArray();
base64 = System.Convert.ToBase64String(result);
I'm having a really interesting problem to solve:
I'm getting a static Google Maps image, with a URL like this.
I've tried several methods to get this information:
Fetching the "remote resource" as a ByteArrayOutputStream, storing the image on the SD card of the simulator, and so on, but every time I get an IllegalArgumentException.
I always get a 200 HTTP response and the correct MIME type ("image/png"), but either way (fetching the image and converting it to a Bitmap, or storing the image on the SD card and reading it later) I get the same result: the file is always corrupt.
I really believe it's an encoding problem, or a problem in the reading method (similar to this one):
public static Bitmap downloadImage(InputStream inStream){
byte[] buffer = new byte[256];
ByteArrayOutputStream baos = new ByteArrayOutputStream();
while (inStream.read(buffer) != -1){
baos.write(buffer);
}
baos.flush();
baos.close();
byte[] imageData = baos.toByteArray();
Bitmap bi = Bitmap.createBitmapFromBytes(imageData, 0, imageData.length, 1);
//Bitmap bi = Bitmap.createBitmapFromBytes(imageData, 0, -1, 1);
return bi;
}
The only thing that comes to mind is the imageData.length (in the response, the content length is 6005), but I really can't figure this one out.
Any help is more than welcome...
Try this way:
InputStream input = httpConn.openInputStream();
byte[] xmlBytes = new byte[256];
int len = 0;
int size = 0;
StringBuffer raw = new StringBuffer();
while (-1 != (len = input.read(xmlBytes)))
{
raw.append(new String(xmlBytes, 0, len));
size += len;
}
value = raw.toString();
byte[] dataArray = value.getBytes();
EncodedImage bitmap;
bitmap = EncodedImage.createEncodedImage(dataArray, 0,dataArray.length);
final Bitmap googleImage = bitmap.getBitmap();
Swati's answer is good; the key difference from your original loop is that it honors the read count instead of writing the entire 256-byte buffer on every iteration. The same thing can be accomplished with far fewer lines of code:
InputStream input = httpConn.openInputStream();
byte[] dataArray = net.rim.device.api.io.IOUtilities.streamToBytes(input);
Bitmap googleImage = Bitmap.createBitmapFromBytes(dataArray, 0, -1, 1);
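One caveat about the String round trip in the first answer: pushing binary data through a String only survives when the platform's default charset maps every byte value both ways (ISO-8859-1, BlackBerry's default, does; charsets like US-ASCII or UTF-8 do not). A plain-Java sketch of the failure mode:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BinaryRoundTrip {
    public static void main(String[] args) {
        // PNG data contains byte values above 0x7F, e.g. the 0x89 in the magic number.
        byte[] original = { (byte) 0x89, 'P', 'N', 'G', (byte) 0xFF, 0x00 };
        // Decoding with a charset that cannot represent those bytes substitutes
        // a replacement character...
        String asText = new String(original, StandardCharsets.US_ASCII);
        // ...and encoding back yields '?' where the original bytes were.
        byte[] back = asText.getBytes(StandardCharsets.US_ASCII);
        System.out.println(Arrays.equals(original, back)); // false: 0x89 and 0xFF were lost
    }
}
```

Reading straight into a byte[], as in the shorter answer above, avoids the issue entirely.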
I use the following code to retrieve an image from the phone or SD card, and I use that image in my ListField. It produces the correct output, but it takes a very long time to render the screen. How can I solve this problem? Any help is appreciated. Thanks in advance!
String text = fileholder.getFileName();
try{
String path="file:///"+fileholder.getPath()+text;
//path = "file:///SDCard/BlackBerry/pictures/image.bmp"
InputStream inputStream = null;
//Get File Connection
FileConnection fileConnection = (FileConnection) Connector.open(path);
inputStream = fileConnection.openInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int j = 0;
while((j=inputStream.read()) != -1) {
baos.write(j);
}
byte data[] = baos.toByteArray();
inputStream.close();
fileConnection.close();
//Encode and Resize image
EncodedImage eImage = EncodedImage.createEncodedImage(data,0,data.length);
int scaleFactorX = Fixed32.div(Fixed32.toFP(eImage.getWidth()),
Fixed32.toFP(180));
int scaleFactorY = Fixed32.div(Fixed32.toFP(eImage.getHeight()),
Fixed32.toFP(180));
eImage=eImage.scaleImage32(scaleFactorX, scaleFactorY);
Bitmap bitmapImage = eImage.getBitmap();
graphics.drawBitmap(0, y+1, 40, 40,bitmapImage, 0, 0);
graphics.drawText(text, 25, y,0,width);
}
catch(Exception e){}
You should read the files once (on app start or before the screen opens; maybe show a progress dialog there), put the resulting bitmaps in an array, and use that array in paint().
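A minimal sketch of that read-once pattern (plain Java SE for brevity; on BlackBerry's Java ME you would use a Hashtable and a plain interface implementation, and the cached value would be the decoded, pre-scaled Bitmap rather than a byte array; names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Read-once cache: the expensive step (file I/O + image decode + scaling)
// happens a single time per path; paint() then only does cheap map lookups.
public class ImageCache {
    interface Loader { byte[] load(String path); } // stands in for the file/decode step

    private final Map<String, byte[]> cache = new HashMap<>();
    private final Loader loader;
    int loads = 0; // counts actual reads, for illustration

    ImageCache(Loader loader) { this.loader = loader; }

    byte[] get(String path) {
        byte[] data = cache.get(path);
        if (data == null) {
            data = loader.load(path); // only reached on the first request
            cache.put(path, data);
            loads++;
        }
        return data;
    }

    public static void main(String[] args) {
        ImageCache cache = new ImageCache(path -> path.getBytes());
        cache.get("file:///SDCard/BlackBerry/pictures/image.bmp");
        cache.get("file:///SDCard/BlackBerry/pictures/image.bmp");
        System.out.println(cache.loads); // 1: the second call was a cache hit
    }
}
```

Populate the cache for all rows before the screen opens, so scrolling the ListField never touches the filesystem.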