Xamarin Camera2: IOnImageAvailableListener's OnImageAvailable called twice, causing crashes

UPDATE: The initial question has been answered as to why the crashes happen, but the lingering problem remains: why is the OnImageAvailable callback called so many times? When it is called, I want to do stuff with the image, but whatever method I run at that point is called many times as well. Is this the wrong place to be using the resulting image?
I am using the sample code found here for a Xamarin Android implementation of the Android Camera2 API. My issue is that when the capture button is pressed a single time, the IOnImageAvailableListener's OnImageAvailable callback gets called multiple times.
This causes a problem because the image from AcquireNextImage needs to be closed before another can be used, but Close is not called until the Run method of the ImageSaver class, as seen below.
This causes these two errors:
Unable to acquire a buffer item, very likely client tried to acquire
more than maxImages buffers
AND
maxImages (2) has already been acquired, call #close before acquiring
more.
maxImages is set to 2 by default, but setting it to 1 does not help. How do I prevent the callback from being called twice?
public void OnImageAvailable(ImageReader reader)
{
    var image = reader.AcquireNextImage();
    owner.mBackgroundHandler.Post(new ImageSaver(image, file));
}
// Saves a JPEG {@link Image} into the specified {@link File}.
private class ImageSaver : Java.Lang.Object, IRunnable
{
    // The JPEG image
    private Image mImage;
    // The file we save the image into.
    private File mFile;

    public ImageSaver(Image image, File file)
    {
        if (image == null)
            throw new System.ArgumentNullException("image");
        if (file == null)
            throw new System.ArgumentNullException("file");
        mImage = image;
        mFile = file;
    }

    public void Run()
    {
        ByteBuffer buffer = mImage.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        using (var output = new FileOutputStream(mFile))
        {
            try
            {
                output.Write(bytes);
            }
            catch (IOException e)
            {
                e.PrintStackTrace();
            }
            finally
            {
                mImage.Close();
            }
        }
    }
}

The method OnImageAvailable can be called again as soon as you leave it if there is another picture in the pipeline.
I would recommend calling Close in the same method you are calling AcquireNextImage. So, if you choose to get the image directly from that callback, then you have to call Close in there as well.
One solution involves grabbing the image in that method and closing it right away.
public void OnImageAvailable(ImageReader reader)
{
    var image = reader.AcquireNextImage();
    try
    {
        ByteBuffer buffer = image.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        // I am not sure where you get the file instance but it is not important.
        owner.mBackgroundHandler.Post(new ImageSaver(bytes, file));
    }
    finally
    {
        image.Close();
    }
}
The ImageSaver would be modified to accept the byte array as first parameter in the constructor:
public ImageSaver(byte[] bytes, File file)
{
    if (bytes == null)
        throw new System.ArgumentNullException("bytes");
    if (file == null)
        throw new System.ArgumentNullException("file");
    mBytes = bytes;
    mFile = file;
}
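For completeness, a minimal sketch of the matching Run method, assuming the class now stores the byte array in an mBytes field instead of an Image:
public void Run()
{
    // The JPEG bytes were already copied out in OnImageAvailable, so no
    // Image handle is held here and the reader's buffer was freed long ago.
    using (var output = new FileOutputStream(mFile))
    {
        try
        {
            output.Write(mBytes);
        }
        catch (IOException e)
        {
            e.PrintStackTrace();
        }
    }
}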
The major downside of this solution is the risk of putting a lot of pressure on the memory as you basically save the images in memory until they are processed, one after another.
Another solution consists in acquiring the image on the background thread instead.
public void OnImageAvailable(ImageReader reader)
{
// Again, I am not sure where you get the file instance but it is not important.
owner.mBackgroundHandler.Post(new ImageSaver(reader, file));
}
This solution is less memory-intensive, but you might have to increase the maximum number of images from 2 to something higher depending on your needs (a sketch of that follows the Run method below). Again, the ImageSaver's constructor needs to be modified to accept an ImageReader as a parameter:
public ImageSaver(ImageReader imageReader, File file)
{
    if (imageReader == null)
        throw new System.ArgumentNullException("imageReader");
    if (file == null)
        throw new System.ArgumentNullException("file");
    mImageReader = imageReader;
    mFile = file;
}
Now the Run method would have the responsibility of acquiring and releasing the Image:
public void Run()
{
    Image image = mImageReader.AcquireNextImage();
    try
    {
        ByteBuffer buffer = image.GetPlanes()[0].Buffer;
        byte[] bytes = new byte[buffer.Remaining()];
        buffer.Get(bytes);
        using (var output = new FileOutputStream(mFile))
        {
            try
            {
                output.Write(bytes);
            }
            catch (IOException e)
            {
                e.PrintStackTrace();
            }
        }
    }
    finally
    {
        image?.Close();
    }
}
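Since this approach keeps each image acquired until the background thread drains it, the reader needs enough maxImages headroom. A minimal sketch of the reader setup, assuming the Camera2Basic sample's field names (the size and format depend on your configuration):
// Allow a few frames to remain acquired while the background thread catches up.
mImageReader = ImageReader.NewInstance(width, height, ImageFormatType.Jpeg, 4); // maxImages = 4
mImageReader.SetOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);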

I too faced this issue for a long time and tried implementing @kzrytof's solution, but it didn't help as much as expected; however, I found a way to get onImageAvailable to execute only once.
Scenario: when the image is available, the onImageAvailable method is called, right?
So, what I did is: after closing the image using image.close(), I called imageReader.setOnImageAvailableListener() and set the listener to null. This way I stopped the execution from happening a second time.
I know that your question is about Xamarin and my code below is native Android Java, but the methods and functionality are the same, so give it a try:
@Override
public void onImageAvailable(ImageReader reader) {
    final Image image = imageReader.acquireLatestImage();
    try {
        if (image != null) {
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer buffer = planes[0].getBuffer();
            int pixelStride = planes[0].getPixelStride();
            int rowStride = planes[0].getRowStride();
            int rowPadding = rowStride - pixelStride * width;
            int bitmapWidth = width + rowPadding / pixelStride;
            if (latestBitmap == null ||
                    latestBitmap.getWidth() != bitmapWidth ||
                    latestBitmap.getHeight() != height) {
                if (latestBitmap != null) {
                    latestBitmap.recycle();
                }
                // (Re)create the bitmap when the size changes (config assumed ARGB_8888)
                latestBitmap = Bitmap.createBitmap(bitmapWidth, height, Bitmap.Config.ARGB_8888);
            }
            latestBitmap.copyPixelsFromBuffer(buffer);
        }
    } catch (Exception e) {
    } finally {
        if (image != null) {
            image.close();
        }
        // Detach the listener so this callback is not invoked a second time.
        imageReader.setOnImageAvailableListener(null, svc.getHandler());
    }
    // next steps to save the image
}

Related

How do I get the camera2 api to work a second time?

I'm using Xamarin/Android (not Forms), trying to integrate the camera2basic api sample into my project.
https://developer.xamarin.com/samples/monodroid/android5.0/Camera2Basic/
I changed nothing in the sample, and I am only interested in using the main camera and only taking a snapshot.
My project has a MainActivity, and the camera2 view is one of its Fragments, which I'm loading like this:
string fragmentTag = this.Resources.GetString(Resource.String.camera_form);
// Begin the transaction
FragmentTransaction trans = this.FragmentManager.BeginTransaction();
// Add the new fragment to the container.
trans.Add(Resource.Id.fragment_container, camera2BasicFragment, fragmentTag);
// Add the transaction to the back stack.
// The tag is added so we can use PopBackStack to skip a screen on the back key
trans.AddToBackStack(fragmentTag);
// Don't forget to commit
trans.Commit();
Everything works the first time. It takes a photo and saves it to a folder.
The second time I run it, it shows a preview, then when I take a photo it crashes here, where the throw is:
public void CaptureStillPicture()
{
    try
    {
        var activity = Activity;
        if (null == activity || null == mCameraDevice)
        {
            return;
        }
        // This is the CaptureRequest.Builder that we use to take a picture.
        if (stillCaptureBuilder == null)
            stillCaptureBuilder = mCameraDevice.CreateCaptureRequest(CameraTemplate.StillCapture);
        stillCaptureBuilder.AddTarget(mImageReader.Surface);
        // Use the same AE and AF modes as the preview.
        stillCaptureBuilder.Set(CaptureRequest.ControlAfMode, (int)ControlAFMode.ContinuousPicture);
        SetAutoFlash(stillCaptureBuilder);
        // Orientation
        int rotation = (int)activity.WindowManager.DefaultDisplay.Rotation;
        stillCaptureBuilder.Set(CaptureRequest.JpegOrientation, GetOrientation(rotation));
        mCaptureSession.StopRepeating();
        try
        {
            mCaptureSession.Capture(stillCaptureBuilder.Build(), new CameraCaptureStillPictureSessionCallback(this), null);
        }
        catch (System.Exception e)
        {
            throw;
        }
    }
    catch (CameraAccessException e)
    {
        e.PrintStackTrace();
    }
}
With this error:
{Java.Lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
at Java.Interop.JniEnvironment+InstanceMethods.CallIntMethod (Java.Interop.JniObjectReference instance, Java.Interop.JniMethodInfo method, Java.Interop.JniArgumentValue* args) [0x00069] in <286213b9e14c442ba8d8d94cc9dbec8e>:0
at Java.Interop.JniPeerMembers+JniInstanceMethods.InvokeAbstractInt32Method (System.String encodedMember, Java.Interop.IJavaPeerable self, Java.Interop.JniArgumentValue* parameters) [0x00014] in <286213b9e14c442ba8d8d94cc9dbec8e>:0
at Android.Hardware.Camera2.CameraCaptureSessionInvoker.Capture (Android.Hardware.Camera2.CaptureRequest request, Android.Hardware.Camera2.CameraCaptureSession+CaptureCallback listener, Android.OS.Handler handler) [0x00078] in <b781ed64f1d743e7881ac038e0fbdf85>:0
at RvsMobileApp.Activities.Camera2BasicFragment.CaptureStillPicture () [0x000b7] in C:\Source\RVS\rvs-mobile-app\src\Rvs.Mobile.App\Activities\Camera2BasicFragment.cs:807
--- End of managed Java.Lang.IllegalArgumentException stack trace ---
java.lang.IllegalArgumentException: CaptureRequest contains unconfigured Input/Output Surface!
at android.hardware.camera2.CaptureRequest.convertSurfaceToStreamId(CaptureRequest.java:674)
at android.hardware.camera2.impl.CameraDeviceImpl.submitCaptureRequest(CameraDeviceImpl.java:1066)
at android.hardware.camera2.impl.CameraDeviceImpl.capture(CameraDeviceImpl.java:936)
at android.hardware.camera2.impl.CameraCaptureSessionImpl.capture(CameraCaptureSessionImpl.java:173)
at md5bbb797339b35f7667da89d6634e22c37.CameraCaptureListener.n_onCaptureCompleted(Native Method)
at md5bbb797339b35f7667da89d6634e22c37.CameraCaptureListener.onCaptureCompleted(CameraCaptureListener.java:37)
at android.hardware.camera2.impl.CameraCaptureSessionImpl$1.lambda$onCaptureCompleted$3(CameraCaptureSessionImpl.java:640)
at android.hardware.camera2.impl.-$$Lambda$CameraCaptureSessionImpl$1$OA1Yz_YgzMO8qcV8esRjyt7ykp4.run(Unknown Source:8)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:214)
at android.os.HandlerThread.run(HandlerThread.java:65)
}
base: {Java.Lang.RuntimeException}
JniPeerMembers: {Android.Runtime.XAPeerMembers}
At first I thought it was a memory leak, so I made sure my fragment was killing itself off. Here is how I end the fragment when the finished button is pressed:
case Resource.Id.camera_finished:
    // EventHandler<DialogClickEventArgs> nullHandler = null;
    Activity activity = Activity;
    if (activity != null)
    {
        // Send all of the data to the service
        // SendPhotosAndDataToService();
        // Call the parent activity's back to END this Fragment
        activity.FragmentManager.BeginTransaction().Remove(this).CommitNow();
        //activity.OnBackPressed();
    }
    break;
Here are my steps to reproduce the error:
1. Start the Camera (load the fragment)
2. See a preview
3. Take a photo
4. Return to Main Activity (close the fragment)
5. Start the Camera (load the fragment)
6. See a preview
7. Take a photo: CRASH!!!
As long as I don't take any photos, I can load and unload the fragment as many times as I need to.
I Googled "CaptureRequest contains unconfigured Input/Output Surface!", and didn't get enough information to really understand the problem.
I think something is not cleaning itself up after the first run.
I've been working on this issue for days now.
As Alex Cohn pointed out, and as I found when I read this article:
https://hofmadresu.com/2018/09/11/android-camera2-trials-and-tribulations.html
(an excellent resource, by the way), the sample code was not releasing the stillCaptureBuilder so that it could be used a second time.
public void CaptureStillPicture()
{
    try
    {
        var activity = Activity;
        if (null == activity || null == mCameraDevice)
        {
            return;
        }
        // THIS WAS NOT RELEASING THE RESOURCES AND SHOULD BE REMOVED FROM THE SAMPLE!
        //// This is the CaptureRequest.Builder that we use to take a picture.
        ////if (stillCaptureBuilder == null)
        ////    stillCaptureBuilder = mCameraDevice.CreateCaptureRequest(CameraTemplate.StillCapture);
        // This is the proper code
        var stillCaptureBuilder = mCameraDevice.CreateCaptureRequest(CameraTemplate.StillCapture);
        stillCaptureBuilder.AddTarget(mImageReader.Surface);
        // Use the same AE and AF modes as the preview.
        stillCaptureBuilder.Set(CaptureRequest.ControlAfMode, (int)ControlAFMode.ContinuousPicture);
        SetAutoFlash(stillCaptureBuilder);
        // Orientation
        int rotation = (int)activity.WindowManager.DefaultDisplay.Rotation;
        stillCaptureBuilder.Set(CaptureRequest.JpegOrientation, GetOrientation(rotation));
        mCaptureSession.StopRepeating();
        mCaptureSession.AbortCaptures();
        try
        {
            mCaptureSession.Capture(stillCaptureBuilder.Build(), new CameraCaptureStillPictureSessionCallback(this), null);
        }
        catch (System.Exception e)
        {
            throw;
        }
    }
    catch (CameraAccessException e)
    {
        e.PrintStackTrace();
    }
}
I am documenting this so anyone else struggling with camera2 can learn.
Thank you as always
Re-initialize stillCaptureBuilder when you restore the camera fragment. Even better, make sure you clean up stillCaptureBuilder when the fragment is destroyed.
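A minimal sketch of that cleanup, assuming the fragment keeps the builder in a stillCaptureBuilder field as in the sample:
public override void OnDestroyView()
{
    // Release the builder with the fragment so a stale Surface is never reused.
    stillCaptureBuilder?.Dispose();
    stillCaptureBuilder = null;
    base.OnDestroyView();
}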

How to speed up saving attachments when using JavaMail?

I split the Message msg into Multipart multi1 = (Multipart) msg.getContent().
Each mail attachment is in one BodyPart: Part part = multi1.getBodyPart(i);
Then I want to save the attachment:
private void saveFile(String fileName, InputStream in) throws IOException {
    File file = new File(fileName);
    if (!file.exists()) {
        OutputStream out = null;
        try {
            out = new BufferedOutputStream(new FileOutputStream(file));
            in = new BufferedInputStream(in);
            byte[] buf = new byte[BUFFSIZE];
            int len;
            while ((len = in.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
        } catch (FileNotFoundException e) {
            LOG.error(e.toString());
        } finally {
            // close streams
            if (in != null) {
                in.close();
            }
            if (out != null) {
                out.close();
            }
        }
    }
}
But it costs too much time reading the IO stream. For example, a 2.7 MB file needs almost 160 seconds to save to disk. I have already tried Channel and some other IO streams, but nothing changed. Is there any solution for saving attachments using JavaMail?
For more code, see https://github.com/cainzhong/java-mail-demo/blob/master/src/main/java/com/java/mail/impl/ReceiveMailImpl.java
Actually, mail.imaps.partialfetch takes effect and speeds things up a lot. There was a mistake in my previous code. Use:
props.put("mail.imap.partialfetch","false");
props.put("mail.imap.fetchsize", "1048576");
props.put("mail.imaps.partialfetch", "false");
props.put("mail.imaps.fetchsize", "1048576");
instead of
props.put("mail.imap.partialfetch",false);
props.put("mail.imap.fetchsize", "1048576");
props.put("mail.imaps.partialfetch", false);
props.put("mail.imaps.fetchsize", "1048576");
It is important to put quotation marks around "false". If not, the parameters will not take effect: java.util.Properties.getProperty only returns String values, so a Boolean stored with put is silently ignored.
Anyway, thanks to Bill Shannon.
There are two key parts to this operation: reading the data from your mail server and writing the data to your filesystem. Most likely it's the speed of the server and the network connection to the server that controls the overall speed of the operation. You can try setting the mail.imap.fetchsize and mail.imap.partialfetch properties to see if that improves performance.
You can also try using something like NullOutputStream instead of FileOutputStream to measure only the speed of reading the data.

Bitmap Clones -> PictureBox = InvalidOperationException, "Object is currently in use elsewhere", red cross (Windows Forms)

I'm aware there are a lot of questions on this topic, and I've looked through most of them as well as Googling to help me solve this problem, to no avail.
What I'd like to do is post the relevant sections of my code that produce bitmaps and render bitmaps onto PictureBoxes in my UI, and I would like to know if anyone can spot what is specifically causing this error, and can suggest how to avoid or bypass it.
I'll start with the three relevant bits in my VideoRenderer class.
The timer event that continuously calls MoveFrameToBitmap while video is running:
private void TimerTick(object sender, EventArgs e)
{
    if (frameTransport.IsNewFrameAvailable())
    {
        if (frameTransport.GetFrame())
        {
            if (MoveFrameToBitmap())
            {
                double msSinceLastFrame = (Int32)DateTime.Now.Subtract(lastFrameTimestamp).TotalMilliseconds;
                fps = 1000 / msSinceLastFrame;
                lastFrameTimestamp = DateTime.Now;
            }
        }
        else
        {
            if (frameTransport.channelKeyBufferBufidMismatch)
            {
                needsRestart = true;
            }
        }
    }
}
MoveFrameToBitmap, which marshals in a video frame from FrameTransport, creates a bitmap if successful, clones it and queues the frame:
internal bool MoveFrameToBitmap()
{
    bool result = false;
    try
    {
        if (frameTransport.bitmapDataSize == 0)
        {
            return false;
        }
        bool ResolutionHasChanged = ((videoWidth != frameTransport.width) | (videoHeight != frameTransport.height));
        videoHeight = frameTransport.height;
        videoWidth = frameTransport.width;
        Bitmap bitmap = new System.Drawing.Bitmap(videoWidth, videoHeight);
        Rectangle rectangle = new System.Drawing.Rectangle(0, 0, videoWidth, videoHeight);
        BitmapData bitmapData = new System.Drawing.Imaging.BitmapData();
        bitmapData.Width = videoWidth;
        bitmapData.Height = videoHeight;
        bitmapData.PixelFormat = PixelFormat.Format24bppRgb;
        bitmap.LockBits(rectangle, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb, bitmapData);
        Marshal.Copy(frameTransport.bitmapData, 0, bitmapData.Scan0, frameTransport.bitmapDataSize);
        lock (frameQueueLock)
        {
            if (frameQueue.Count == 0)
            {
                frameQueue.Enqueue(bitmap.Clone());
            }
        }
        bitmap.UnlockBits(bitmapData);
        if (ResolutionHasChanged) skypeRef.events.FireOnVideoResolutionChanged(this, new RootEvents.OnVideoResolutionChangedArgs(videoWidth, videoHeight));
        bitmap.Dispose();
        result = true;
    }
    catch (Exception) { }
    GC.Collect();
    return result;
}
The property that exposes the queued frame, which can be safely accessed even when a frame is not currently queued:
public Bitmap QueuedFrame
{
    get
    {
        try
        {
            lock (frameQueueLock)
            {
                return frameQueue.Dequeue() as Bitmap;
            }
        }
        catch (Exception)
        {
            return null;
        }
    }
}
That's all for VideoRenderer. Now I'll show the relevant property of the static MyVideo class, which encapsulates, controls and returns frames from two video renderers. Here is the property which exposes the queued frame of the first renderer (every call to videoPreviewRenderer is locked with VPR_Lock):
public static Bitmap QueuedVideoPreviewFrame
{
    get
    {
        lock (VPR_Lock) { return videoPreviewRenderer.QueuedFrame; }
    }
}
The property for the second renderer is the same, except with its own lock object; a sketch follows below.
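For reference, a sketch of that second property; the renderer field and lock names here are assumptions mirroring the first property:
public static Bitmap QueuedLiveSessionParticipantVideoFrame
{
    get
    {
        // Same pattern as above, guarded by the second renderer's own lock.
        lock (LSPV_Lock) { return liveSessionParticipantVideoRenderer.QueuedFrame; }
    }
}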
Moving on, here is the thread and its call in my UI that accesses the two queued frame properties in MyVideo and renders frames to the two PictureBoxes:
ThreadStart renderVCONF3501VideoThreadStart = new ThreadStart(new Action(() =>
{
    while (MyAccount.IsLoggedIn)
    {
        if (MyVideo.VideoPreviewIsRendering)
        {
            if (MyVideo.VideoPreviewRenderer.NeedsRestart)
            {
                MyVideo.VideoPreviewRenderer.Stop();
                MyVideo.VideoPreviewRenderer.Start();
                MyVideo.VideoPreviewRenderer.NeedsRestart = false;
            }
            else
            {
                try
                {
                    Bitmap newVideoPreviewFrame = MyVideo.QueuedVideoPreviewFrame;
                    if (newVideoPreviewFrame != null)
                    {
                        lock (VCONF3501_VPI_Lock)
                        {
                            VCONF3501_VideoPreview.Image = newVideoPreviewFrame;
                        }
                    }
                }
                catch (Exception) { continue; }
            }
        }
        else
        {
            lock (VCONF3501_VPI_Lock)
            {
                VCONF3501_VideoPreview.Image = null;
            }
        }
        if (MyVideo.LiveSessionParticipantVideoIsRendering)
        {
            if (MyVideo.LiveSessionParticipantVideoRenderer.NeedsRestart)
            {
                MyVideo.LiveSessionParticipantVideoRenderer.Stop();
                MyVideo.LiveSessionParticipantVideoRenderer.Start();
                MyVideo.LiveSessionParticipantVideoRenderer.NeedsRestart = false;
            }
            else
            {
                try
                {
                    Bitmap newLiveSessionParticipantVideoFrame = MyVideo.QueuedLiveSessionParticipantVideoFrame;
                    if (newLiveSessionParticipantVideoFrame != null)
                    {
                        lock (VCONF3501_LSPVI_Lock)
                        {
                            VCONF3501_Video.Image = newLiveSessionParticipantVideoFrame;
                        }
                    }
                }
                catch (Exception) { continue; }
            }
        }
        else
        {
            lock (VCONF3501_LSPVI_Lock)
            {
                VCONF3501_Video.Image = null;
            }
        }
        GC.Collect();
    }
}));
new Thread(renderVCONF3501VideoThreadStart).Start();
The GC.Collect() calls are there to force bitmap memory release, as there was a memory leak (and there still might be one: the cloned bitmaps aren't being manually disposed of, and I'm not sure where to do that at this point).
Where is the InvalidOperationException in System.Drawing (which causes a red cross to be drawn on the PictureBox) coming from, what am I doing wrong in terms of locking and access, and how can I avoid or bypass this error?
I am trying to bypass it with the catch-exception-and-continue logic in the thread, and I have confirmed that works... sometimes. At other times, the failed draw attempt seems to get far enough to draw the red cross anyway, and after that point the PictureBox is thoroughly unresponsive: new frames cannot be drawn to it, even when the video is still running fine.
Perhaps there is a way to refresh the PictureBox so that it accepts new frames?
I was having a problem with the red cross too; then I found this and it helped me. I hope it helps you too:
WinForms controls and the red X
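For what it's worth, here is a minimal sketch (mine, not from that link) of the usual remedy: GDI+ bitmaps are not thread-safe, so assigning PictureBox.Image from a worker thread while the control is painting is a classic cause of "Object is currently in use elsewhere". Handing the frame to the UI thread and disposing the previous one avoids both the cross-thread access and the leak of cloned bitmaps:
void ShowFrame(PictureBox box, Bitmap frame)
{
    box.BeginInvoke(new Action(() =>
    {
        var previous = box.Image;
        box.Image = frame;    // assign only on the UI thread
        previous?.Dispose();  // release the GDI+ handle of the old frame
    }));
}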

GLSurfaceView.Renderer crashes when resuming because "bitmap is recycled"

Once again I need some help:
Yesterday I asked this question about how to use a large JPG image as a Bitmap (http://stackoverflow.com/questions/13511657/problems-with-big-drawable-jpg-image), and I resolved it myself (see my own answer on that question). But whenever I resume my activity, which uses that bitmap as the GLRenderer texture, it crashes. I've tried many things; the last attempt was to make that bitmap static in order to keep it as a member variable of the activity, but it crashes because, I suppose, it loses its mBuffer.
More details on the Activity code:
I declared it as singleInstance in the manifest:
android:launchMode="singleInstance"
in order to keep the tiles for the renderer.
and here some code:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mGLSurfaceView = new GLSurfaceView(this);
    mGLSurfaceView.setEGLConfigChooser(true);
    mSimpleRenderer = new GLRenderer(this);
    getTextures();
    if (!mIsTileMapInitialized) {
        tileMap = new LandSquareGrid(1, 1, mHeightmap, mLightmap, false, true, true, 128, true);
        tileMap.setupSkybox(mSkyboxBitmap, true);
        mIsTileMapInitialized = true;
    }
    initializeRenderer();
    mGLSurfaceView.setRenderer(mSimpleRenderer);
    setContentView(R.layout.game_layout);
    setOnTouchListener();
    initializeGestureDetector();
    myCompassView = (MyCompassView) findViewById(R.id.mycompassview);
    // Once set the content view we can set the TextViews:
    coordinatesText = (TextView) findViewById(R.id.coordDynamicText);
    altitudeText = (TextView) findViewById(R.id.altDynamicText);
    directionText = (TextView) findViewById(R.id.dirDynamicText);
    //if (!mIsGLInitialized){
    mOpenGLLayout = (LinearLayout) findViewById(R.id.openGLLayout);
    mOpenGLLayout.addView(mGLSurfaceView);
    mVirtual3DMap = new Virtual3DMap(mSimpleRenderer, tileMap);
    if (mGameThread == null) {
        mGameThread = new Thread(mVirtual3DMap);
        mGameThread.start();
    }
}
In the getTextures method I load a few small textures, plus the largest one, as in my self-answer to yesterday's question:
if (mTerrainBitmap == null) {
    InputStream is = getResources().openRawResource(R.drawable.terrain);
    try {
        // Set terrain bitmap options to 16-bit, 565 format.
        terrainBitmapOptions.inPreferredConfig = Bitmap.Config.RGB_565;
        Bitmap auxBitmap = BitmapFactory.decodeStream(is, null, terrainBitmapOptions);
        mTerrainBitmap = Bitmap.createBitmap(auxBitmap);
    } catch (Exception e) {
    } finally {
        try {
            is.close();
        } catch (IOException e) {
            // Ignore.
        }
    }
}
So, again, first time it works great but when I go back I do:
@Override
protected void onPause() {
    super.onPause();
    mGLSurfaceView.onPause();
}

@Override
protected void onStop() {
    super.onStop();
    if (mVirtual3DMap != null) {
        try {
            mVirtual3DMap.cancel();
            mGameThread = null;
            mVirtual3DMap = null;
            mGLSurfaceView.destroyDrawingCache();
            mSimpleRenderer = null;
            System.gc();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
And when I resume the activity:
@Override
protected void onResume() {
    super.onResume();
    mGLSurfaceView.onResume();
    if (mVirtual3DMap != null) {
        try {
            mVirtual3DMap.resume();
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }
}
And it crashes.
Why?? Ok, here is the exception cause on the GLThread:
java.lang.IllegalArgumentException: bitmap is recycled...
I tried this messy stuff because launching the original activity more than two times crashes the application, either because of this or because of the amount of memory used, and now I don't know whether to revert all these changes or what to do with this.
Is there a good way to keep this bitmap in memory, usable by this or another activity of the application?
Please, I need your advices.
Thanks in advance.
Do not handle the resources manually, or your app's surface will break.
If you worry about reloading resources and you use API level 11+, you can use setPreserveEGLContextOnPause(). It will preserve your textures and FBOs.
If you can't use API 11+, you can port GLSurfaceView to your app. You can check my own GLSurfaceView, which is ported from ICS.
PS: Sorry about my poor english.
No. Let Android handle all the resources. You must handle the appropriate events and reload the bitmap when the activity is resumed. You cannot expect that any OpenGL handles are still valid after the activity has been resumed.
Think of it as in the example of a laptop coming out from hibernation. Although all memory has been restored, you cannot expect that any open socket has still a real active connection going.
I am an Android noobie, so please correct me if I am wrong.

xuggler: no video in encoded 3gp file

I am trying to encode videos into 3GP format using Xuggler. I somehow got it to work, work as in the program stopped throwing errors and exceptions, but the new file that is created does not have any video. Now there is no error or exception for me to work with, so I have hit a wall.
EDIT: Note that the audio is working as it should.
This is the code for the main function where the listeners are configured
IMediaReader reader = ToolFactory.makeReader("/home/hp/mms/b.flv");
IMediaWriter writer = ToolFactory.makeWriter("/home/hp/mms/xuggle/a_converted.3gp", reader);
IMediaDebugListener debugListener = ToolFactory.makeDebugListener();
writer.addListener(debugListener);
ConvertVideo convertor = new ConvertVideo(new File("/home/hp/mms/b.flv"), new File("/home/hp/mms/xuggle/a_converted.3gp"));
// convertor.addListener(writer);
reader.addListener(writer);
writer.addListener(convertor);
while (reader.readPacket() == null)
    ;
And this is the code for the convertor that I wrote:
public ConvertVideo(File inputFile, File outputFile)
{
    this.outputFile = outputFile;
    reader = ToolFactory.makeReader(inputFile.getAbsolutePath());
    reader.addListener(this);
}

private IVideoResampler videoResampler = null;
private IAudioResampler audioResampler = null;

@Override
public void onAddStream(IAddStreamEvent event)
{
    if (writer == null)
    {
        writer = ToolFactory.makeWriter(outputFile.getAbsolutePath(), reader);
    }
    int streamIndex = event.getStreamIndex();
    IStreamCoder streamCoder = event.getSource().getContainer().getStream(streamIndex).getStreamCoder();
    if (streamCoder.getCodecType() == ICodec.Type.CODEC_TYPE_AUDIO)
    {
        streamCoder.setFlag(IStreamCoder.Flags.FLAG_QSCALE, false);
        writer.addAudioStream(streamIndex, 0, 1, 8000);
    }
    else if (streamCoder.getCodecType() == ICodec.Type.CODEC_TYPE_VIDEO)
    {
        streamCoder.setFlag(IStreamCoder.Flags.FLAG_QSCALE, false);
        streamCoder.setCodec(ICodec.findEncodingCodecByName("h263"));
        writer.addVideoStream(streamIndex, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    }
    super.onAddStream(event);
}

@Override
public void onVideoPicture(IVideoPictureEvent event)
{
    IVideoPicture pic = event.getPicture();
    if (videoResampler == null)
    {
        videoResampler = IVideoResampler.make(VIDEO_WIDTH, VIDEO_HEIGHT, pic.getPixelType(), pic.getWidth(), pic.getHeight(), pic.getPixelType());
    }
    IVideoPicture out = IVideoPicture.make(pic.getPixelType(), VIDEO_WIDTH, VIDEO_HEIGHT);
    videoResampler.resample(out, pic);
    IVideoPictureEvent asc = new VideoPictureEvent(event.getSource(), out, event.getStreamIndex());
    super.onVideoPicture(asc);
    out.delete();
}

@Override
public void onAudioSamples(IAudioSamplesEvent event)
{
    IAudioSamples samples = event.getAudioSamples();
    if (audioResampler == null)
    {
        audioResampler = IAudioResampler.make(1, samples.getChannels(), 8000, samples.getSampleRate());
    }
    if (event.getAudioSamples().getNumSamples() > 0)
    {
        IAudioSamples out = IAudioSamples.make(samples.getNumSamples(), samples.getChannels());
        audioResampler.resample(out, samples, samples.getNumSamples());
        AudioSamplesEvent asc = new AudioSamplesEvent(event.getSource(), out, event.getStreamIndex());
        super.onAudioSamples(asc);
        out.delete();
    }
}
I just can't seem to figure out where the problem is. I would be thankful if someone could point me in the right direction.
EDIT: If I look at the properties of my newly encoded video, its audio properties are set but its video properties are not, i.e. in the video properties, dimensions = 0 x 0, frame rate = N/A, and codec = h.263. The problem here is the 0 x 0 dimensions.
Well, I found the answer; not exactly the answer, but a way to do what I was doing. Right now I am not quite sure why my code was not working, but here you can find the solution that worked for me. The author there makes a separate resizer class, adds it as a listener to the reader, and overrides the onVideoPicture method in it. Then he makes another class, MyVideoListener, overrides the onAddStream method, and adds it as a listener to the writer. Then he links the two parts by adding the writer as a listener to the resizer. Works like a charm.
