get velocity and calculate acceleration DJI SDK - dji-sdk

Hello, I want to get the value of velocity X in real time and calculate the acceleration X.
I used the following code in MainActivity.java:
public class MainActivity extends AppCompatActivity {
    private static final String TAG = MainActivity.class.getName();

    @SuppressLint("SetTextI18n")
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        ((Aircraft) DemoApplication.getProductInstance()).getFlightController().setStateCallback(new FlightControllerState.Callback() {
            @Override
            public void onUpdate(@NonNull FlightControllerState flightControllerState) {
                float vx1 = flightControllerState.getVelocityX();
                Handler handler = new Handler();
                handler.postDelayed(() -> {
                    //Do something after 100ms
                }, 100);
                float vx2 = flightControllerState.getVelocityX();
                float accelerationVariableX = (10 * vx2) - (10 * vx1);
                TextView acc = (TextView) findViewById(R.id.acc);
                //String v1 = Float.toString(vx1);
                //String v2 = Float.toString(vx2);
                acc.setText("accX: " + accelerationVariableX + " m/s²" + "/x1 " + vx1 + "/x2 " + vx2); // acceleration and velocity
            }
        });
    }
}
The application crashes when I open it.
How do I fix the crash?
NB: when I removed these lines, the app worked well:

((Aircraft) DemoApplication.getProductInstance()).getFlightController().setStateCallback(new FlightControllerState.Callback() {
    @Override
    public void onUpdate(@NonNull FlightControllerState flightControllerState) {

I suspect one of the values is null: either getProductInstance() or getFlightController().
Are you sure the aircraft is connected and the SDK has completed registering and initializing? Both will return null until that completes.
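
A minimal sketch of that guard, assuming this runs inside the activity's onCreate with the question's DemoApplication helper and an acc TextView field. It also derives dt from successive onUpdate calls instead of assuming 100 ms between the two reads (note that the postDelayed in the question does not actually delay the second getVelocityX() call):

// Sketch: register only when the product is ready, and compute
// acceleration from the time between successive state callbacks.
Aircraft aircraft = (Aircraft) DemoApplication.getProductInstance();
if (aircraft == null || aircraft.getFlightController() == null) {
    Log.w(TAG, "Aircraft not connected or SDK registration not complete yet");
    return;
}
aircraft.getFlightController().setStateCallback(new FlightControllerState.Callback() {
    private float lastVx;
    private long lastNanos;

    @Override
    public void onUpdate(@NonNull FlightControllerState state) {
        long now = System.nanoTime();
        float vx = state.getVelocityX();
        if (lastNanos != 0) {
            float dt = (now - lastNanos) / 1_000_000_000f; // seconds between callbacks
            final float accX = (vx - lastVx) / dt;         // a = dv/dt
            // DJI state callbacks arrive off the UI thread, so post the update
            runOnUiThread(() -> acc.setText("accX: " + accX + " m/s²"));
        }
        lastVx = vx;
        lastNanos = now;
    }
});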

Related

How to create a restart method using JFrame?

Recently I started learning Java. I watched a YouTube video where a programmer used static methods and variables to create a simple guessing game using JFrame.
Afterwards I tried to implement a close/restart button; after reading some threads I realized static methods aren't meant for that. So my question is: how do I solve my problem now? :)
import javax.swing.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.concurrent.ThreadLocalRandom;

public class main extends JFrame {
    JLabel text = new JLabel("Please choose a number between 1 & 10 ");
    JLabel textVersuch = new JLabel();
    JButton button = new JButton("Try");
    int myNumber = ThreadLocalRandom.current().nextInt(1, 10 + 1);
    JTextField textField = new JTextField();
    int count = 0;
    // Is there a better way to hide all this information, but still keep it usable for my methods?

    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.openUI(); // error occurs
    }

    // How do I manage to call my method openUI() to start my game?
    public void openUI() {
        JFrame frame = new JFrame("Program");
        frame.setSize(400, 400);
        frame.setLocation(800, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setDefaultLookAndFeelDecorated(true);
        text.setBounds(0, 50, 400, 25);
        textVersuch.setBounds(300, 0, 100, 25);
        textField.setBounds(0, 150, 50, 25);
        button.setBounds(50, 150, 100, 25);
        button.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                try {
                    String textFromTextfield = textField.getText();
                    int number = Integer.parseInt(textFromTextfield);
                    if (number < 1 || number > 10) {
                        text.setText("Your number has to be between 1 & 10 ");
                        textField.setText("");
                    } else {
                        guess(number);
                    }
                } catch (Exception error) {
                    text.setText("Please enter a digit! ");
                    textField.setText("");
                }
            }
        });
        frame.add(button);
        frame.add(textField);
        frame.add(text);
        frame.add(textVersuch);
        frame.setLayout(null);
        frame.setVisible(true);
    }

    public void close(JFrame frame) {
        frame.dispose(); // here I want to close the game
    }

    public void guess(int number) throws InterruptedException {
        count++;
        textVersuch.setText(count + " tries!");
        if (number == myNumber) {
            text.setText("You were right! You tried " + count + " time(s) :)");
            button.setText("Restart");
            button.addActionListener(new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    // How can I restart my JFrame?
                }
            });
        } else if (count < 3) {
            text.setText("Wrong guess! Retry");
            if (number < myNumber) {
                text.setText("Your searched number is bigger than " + number);
            } else {
                text.setText("Your searched number is lower than " + number);
            }
        } else {
            text.setText("Sorry, you lost. The number was " + myNumber);
        }
        textField.setText("");
    }
}
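
The "error occurs" line fails because openUI() is defined on your main class, not on JFrame, so it must be called on an instance (new main().openUI()). For the restart itself, one common pattern is to keep all game state in instance fields, then have the restart button dispose the old frame and construct a fresh game. A minimal sketch of that idea (GuessGame is a hypothetical class illustrating the pattern, not the code above):

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import java.util.concurrent.ThreadLocalRandom;

public class GuessGame {
    private final JFrame frame = new JFrame("Program");
    // Fresh state each time the game is constructed
    private final int myNumber = ThreadLocalRandom.current().nextInt(1, 11);

    public GuessGame() {
        JButton restart = new JButton("Restart");
        restart.addActionListener(e -> restart());
        frame.add(restart);
        frame.setSize(400, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }

    private void restart() {
        frame.dispose();   // close the old window
        new GuessGame();   // build a fresh game with a new random number
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(GuessGame::new); // create the UI on the EDT
    }
}

Resetting the model fields (myNumber, count) in place works too and avoids the window flicker; the key point is that no game state is static.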

C#: Have my whole program run in the background

I am working on an app that creates a mock location. After I start it (everything seems to work there) and then open Maps, I always get placed right where I actually am, not at my fake coordinates. So I'm thinking this is because my program stops as soon as I push it into the background on the Android phone I'm debugging with.
1) Would you say so too?
2) So, how do I get my program to keep mocking the location even though it's in the background? I already set up a timer that mocks a new location every 5 seconds. Here is my main activity (which happens to be a bit long, excuse me..)
Any help would be AWESOME!
public static double GlobalLongitude = 0.0; // global, because I need to pull the string out of a void method
public static double GlobalLatitude = 0.0;
static readonly string TAG = "X:" + typeof(Activity1).Name;
Location _currentLocation;
LocationManager _locationManager;
string _locationProvider;
TextView _locationText;
static TextView txtAdded;
static Button btnMain;

protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);
    // Set our view from the "main" layout resource
    SetContentView(Resource.Layout.Main);
    _locationText = FindViewById<TextView>(Resource.Id.GpsTest);
    txtAdded = FindViewById<TextView>(Resource.Id.AddedCoordinates);
    btnMain = FindViewById<Button>(Resource.Id.startbutton);
    CountDown();
    InitializeLocationManager();
} // start here! :D

private void CountDown()
{
    System.Timers.Timer timer = new System.Timers.Timer();
    timer.Interval = 5000;
    timer.Elapsed += OnTimedEvent;
    timer.Enabled = true;
}

private void OnTimedEvent(object sender, System.Timers.ElapsedEventArgs e) // txtAdded is here!
{
    txtAdded.Text = SetMockLocation();
}

public void OnLocationChanged(Location location)
{
    string test = "Null";
    string test2 = "Null";
    bool waitforresult = false;
    _currentLocation = location;
    if (_currentLocation == null)
    {
        _locationText.Text = "Unable to determine your location. Try again in a short while.";
    }
    else
    {
        _locationText.Text = string.Format("Unchanged: {0:f5} {1:f5}", _currentLocation.Latitude, _currentLocation.Longitude); // hh: 53, 10
        // this is printed on button click
        test = string.Format("{0:f5}", _currentLocation.Latitude); // to format
        test2 = string.Format("{0:f5}", _currentLocation.Longitude);
        double.TryParse(test, out GlobalLatitude);
        double.TryParse(test2, out GlobalLongitude);
        if (test != "Null")
        {
            waitforresult = true;
        }
        if (waitforresult == true)
        {
            Add700ToCoordinates();
        }
    }
} // output of the coordinates

void InitializeLocationManager()
{
    _locationManager = (LocationManager)GetSystemService(LocationService);
    Criteria criteriaForLocationService = new Criteria
    {
        Accuracy = Accuracy.Fine
    };
    IList<string> acceptableLocationProviders = _locationManager.GetProviders(criteriaForLocationService, true);
    if (acceptableLocationProviders.Any())
    {
        _locationProvider = acceptableLocationProviders.First();
    }
    else
    {
        _locationProvider = string.Empty;
    }
    Log.Debug(TAG, "Using " + _locationProvider + ".");
}

protected override void OnResume()
{
    base.OnResume();
    _locationManager.RequestLocationUpdates(_locationProvider, 0, 0, this);
    Log.Debug(TAG, "Listening for location updates using " + _locationProvider + ".");
}

protected override void OnPause()
{
    base.OnPause();
    _locationManager.RemoveUpdates(this);
    Log.Debug(TAG, "No longer listening for location updates.");
}

public static double Add700ToCoordinates()
{
    string xy = "Null";
    double FinalCoordinates = (GlobalLatitude + 0.01065);
    btnMain.Click += (sender, e) =>
    {
        xy = FinalCoordinates.ToString();
        xy = xy + " " + GlobalLongitude.ToString();
    };
    return FinalCoordinates;
}

public static string SetMockLocation()
{
    var context = Android.App.Application.Context;
    var locationManager = context.GetSystemService(LocationService) as LocationManager;
    locationManager.AddTestProvider("Test09", false, false, false, false, false, false, false, Power.Low, Android.Hardware.SensorStatus.AccuracyHigh);
    locationManager.SetTestProviderEnabled("Test09", true);
    var location = new Location("Test09");
    location.Latitude = Add700ToCoordinates();
    location.Longitude = GlobalLongitude;
    location.Accuracy = 0; // does this work? ... yes, but what does it mean?
    location.Time = DateTime.Now.Ticks;
    location.ElapsedRealtimeNanos = 100; // same here... what does it mean? :D
    locationManager.SetTestProviderLocation("Test09", location);
    // Check if your event reacted the right way
    locationManager.RemoveTestProvider("Test09");
    return location.Latitude.ToString();
}
}
}
There are probably two things at play here: a service and background processing.
You can set the mock locations, probably, as a service that runs in the background. You can do this in the native code.
And if you are using Xamarin or Xamarin.Forms, you can use the MessagingCenter feature to talk to/access the service.
You can have native code running services in the background, and your PCL/shared code can access the information it needs from the native code.
You can check this link for a very helpful example and walkthrough.
First you need to create a native implementation of the service for each platform.
For Android:
You need to wrap your service in an Android Service so it can keep working in the background. Please see these references: https://developer.android.com/guide/components/services.html
https://developer.xamarin.com/guides/android/application_fundamentals/services/
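
For reference, a minimal Java sketch of what such a started service can look like on the Android side, hosting the 5-second timer (MockLocationService and MockLocationPoster are hypothetical names illustrating the shape; a Xamarin Service subclass mirrors the same callbacks):

// Sketch of an Android started service that keeps a periodic task alive
// in the background. MockLocationPoster stands in for SetMockLocation().
import android.app.Service;
import android.content.Intent;
import android.os.Handler;
import android.os.IBinder;

public class MockLocationService extends Service {
    private final Handler handler = new Handler();
    private final Runnable tick = new Runnable() {
        @Override
        public void run() {
            MockLocationPoster.post();       // hypothetical: push the next fake fix
            handler.postDelayed(this, 5000); // re-arm the 5-second timer
        }
    };

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        handler.post(tick);
        return START_STICKY; // ask the system to recreate the service if killed
    }

    @Override
    public void onDestroy() {
        handler.removeCallbacks(tick);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started (not bound) service
    }
}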
For iOS:
It's a little bit harder. First read this reference, especially the "Declaring Your App's Supported Background Tasks" part (https://developer.apple.com/library/content/documentation/iPhone/Conceptual/iPhoneOSProgrammingGuide/BackgroundExecution/BackgroundExecution.html).
You can use the "Location updates" background mode and inject your mock-generator service into the "location updates" service.
Below is an example for Xamarin iOS:
private void StartAccelerometerUpdates()
{
    if (_motionManager.AccelerometerAvailable)
        _motionManager.AccelerometerUpdateInterval = ACCEL_UPDATE_INTERVAL;
    _motionManager.StartAccelerometerUpdates(NSOperationQueue.MainQueue, AccelerometerDataUpdatedHandler);
}

public void AccelerometerDataUpdatedHandler(CMAccelerometerData data, NSError error)
{
    // your mock-generator code
}

Android seekbar getting set to 0 on device rotation

I have a very similar problem to
Seekbar 'unhooking' from media player on orientation change: I get the correct output in onSaveInstanceState and onCreateView for my progress bar.
I have implemented a media player in a fragment. On device rotation the song keeps playing fine, but the seekbar progress gets set to 0. I have done the following.
@Override
public void onSaveInstanceState(Bundle savedInstanceState) {
    super.onSaveInstanceState(savedInstanceState);
    // NOTE: When navigating from one fragment to the next,
    // Bundle/savedInstanceState is always null.
    // Implemented it using Shared Preferences.
    // Always call the superclass so it can save the view hierarchy state
    savedInstanceState.putInt(SEEKBAR_PROGRESS, utils.getProgressPercentage(getCurrentPosition(), getDuration()));
    Log.i(LOG_TAG, ">>>>> onSaveInstanceState : " + savedInstanceState.getInt(SEEKBAR_PROGRESS));
}
In onCreateView I check savedInstanceState; if it is not null and the value is > 0, I set the seekbar progress, but it is not working. Can someone please tell me why?
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    Bundle arguments = getArguments();
    if (arguments != null) {
        mUri = arguments.getParcelable(TrackPlayerActivityFragment.DETAIL_URI);
    }
    final View rootView = inflater.inflate(R.layout.fragment_track_player, container, false);
    currentTimeTextView = (TextView) rootView.findViewById(R.id.current_time);
    totalTimeView = (TextView) rootView.findViewById(R.id.total_time);
    playButtonView = (ToggleButton) rootView.findViewById(R.id.media_play);
    Cursor cur = getActivity().getContentResolver().query(mUri, null, null, null, null);
    mTrackPlayerAdapter = new TrackPlayerAdapter(getActivity(), cur, 0, this);
    mListView = (ListView) rootView.findViewById(R.id.listview_player);
    mListView.setAdapter(mTrackPlayerAdapter);
    // initialize the play button
    playButtonView = (ToggleButton) rootView.findViewById(R.id.media_play);
    if (savedInstanceState != null && savedInstanceState.getInt(SEEKBAR_PROGRESS) > 0) {
        Log.i(LOG_TAG, ">>>>> onCreateView savedInstance : " + savedInstanceState.getInt(SEEKBAR_PROGRESS));
        mSpotifyMusicSeekBar.setProgress(savedInstanceState.getInt(SEEKBAR_PROGRESS));
    }
    return rootView;
}
The play-song code is a runnable thread, which keeps working to completion even on device rotation.
public void playSong(String songUrl, String songTitle) {
    Log.i(LOG_TAG, ">>>>> Song URL fragment - " + songUrl);
    mSpotifyMusicService.setSongURL(songUrl);
    mSpotifyMusicService.setSongTitle(songTitle);
    mSpotifyMusicService.playSong();
    View v = getActivity().findViewById(R.id.listview_player);
    mSpotifyMusicSeekBar = (SeekBar) v.findViewById(R.id.musicSeekBar);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                int progress = 0;
                if (startingPoint > 0) {
                    progress = startingPoint;
                }
                while (progress <= 100) {
                    Thread.sleep(100);
                    final long totalDuration = getDuration();
                    progress = utils.getProgressPercentage(getCurrentPosition(), totalDuration);
                    // set the seekbar position, will be used in saved instance later on
                    mSpotifyMusicSeekBar.setProgress(progress);
                }
            } catch (InterruptedException e) {
                return;
            } catch (Exception e) {
                return;
            }
        }
    }).start();
    // implement the OnSeekBarChangeListener interface methods
    mSpotifyMusicSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
        @Override
        public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
            if (fromUser) {
                Log.i(LOG_TAG, ">>>>> User Progress change" + progress);
                mSpotifyMusicService.seek(progress);
            } else {
                updateMediaPlayerControls(
                        utils.milliSecondsToTimer(getCurrentPosition()),
                        utils.milliSecondsToTimer(getDuration())
                );
                //Log.i(LOG_TAG, ">>>>> System progress %age - " + progress);
            }
        }

        @Override
        public void onStartTrackingTouch(SeekBar seekBar) {
            Log.i("onStartTrackingTouch - ", "" + seekBar.getProgress());
        }

        @Override
        public void onStopTrackingTouch(SeekBar seekBar) {
            Log.i("onStopTrackingTouch - ", "" + seekBar.getProgress());
            startingPoint = seekBar.getProgress();
            mSpotifyMusicService.seek(startingPoint);
        }
    });
}
The way I solved it was to move the seekbar out of the custom adapter and make it part of the fragment, then use onSaveInstanceState to store the percentage and reapply it in onCreateView after checking that the saved instance bundle is not null.
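
A minimal sketch of that arrangement, assuming the SeekBar is moved into fragment_track_player.xml (the musicSeekBar id is reused here) so the reference exists before the saved state is reapplied:

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    View rootView = inflater.inflate(R.layout.fragment_track_player, container, false);
    // The seekbar now lives in the fragment's own layout, not in an adapter row,
    // so findViewById returns a valid view as soon as the layout is inflated.
    mSpotifyMusicSeekBar = (SeekBar) rootView.findViewById(R.id.musicSeekBar);
    if (savedInstanceState != null) {
        // Reapply the percentage stored in onSaveInstanceState
        mSpotifyMusicSeekBar.setProgress(savedInstanceState.getInt(SEEKBAR_PROGRESS, 0));
    }
    return rootView;
}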

Using the onFrameAvailable() in Jacobi Google Tango API

Question: Does anyone know how to get the Tango's color camera image buffer using the Tango Java (Jacobi) API onFrameAvailable() callback?
Background:
I have an augmented reality application that displays video in the background of the Tango. I've successfully created the video overlay example using the Java API (Jacobi), following this example. My application works fine, and the video is rendered in the background properly.
As part of the application, I'd like to store a copy of the video backbuffer when the user presses a button. Therefore, I need access to the camera's RGB data.
According to the Jacobi release notes, any class desiring access to the camera RGB data should implement the new onFrameAvailable() method in the OnTangoUpdateListener. I did this, but I don't see any handle or arguments to actually get the pixels:
Java API
@Override
public void onFrameAvailable(int cameraId) {
    //Log.w(TAG, "Frame available!");
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        tangoCameraPreview.onFrameAvailable();
    }
}
As shown, onFrameAvailable has only one argument, an integer designating the id of the camera generating the view. Contrast this with the C-library callback, which provides access to the image buffer:
C API
TangoErrorType TangoService_connectOnFrameAvailable(
    TangoCameraId id, void* context,
    void (*onFrameAvailable)(void* context, TangoCameraId id,
                             const TangoImageBuffer* buffer));
I was expecting the Java method to have something similar to the buffer object in the C API call.
What I've Tried
I tried extending the TangoCameraPreview class and saving the image there, but I only get a black background.
public class CameraSurfaceView extends TangoCameraPreview {
    private boolean takeSnapShot = false;

    public void takeSnapShot() {
        takeSnapShot = true;
    }

    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     * https://stackoverflow.com/questions/14620055/how-to-take-a-screenshot-of-androids-surface-view
     */
    public void screenGrab2() {
        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();
        View v = getRootView();
        v.setDrawingCacheEnabled(true);
        // this is the important code :)
        // Without it the view will have a dimension of 0,0 and the bitmap will be null
        v.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        v.layout(0, 0, width, height);
        v.buildDrawingCache(true);
        Bitmap image = v.getDrawingCache();
        // TODO: make separate subdirectories for each exploitation session
        String targetPath = Environment.getExternalStorageDirectory() + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";
        if (!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }
        try {
            File targetDirectory = new File(targetPath);
            File photo = new File(targetDirectory, imageFileName);
            FileOutputStream fos = new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        }
    }

    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     */
    public void screenGrab() {
        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();
        Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(image);
        canvas.drawBitmap(image, 0, 0, null);
        // TODO: make separate subdirectories for each exploitation session
        String targetPath = Environment.getExternalStorageDirectory() + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";
        if (!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }
        try {
            File targetDirectory = new File(targetPath);
            File photo = new File(targetDirectory, imageFileName);
            FileOutputStream fos = new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        }
    }

    @Override
    public void onFrameAvailable() {
        super.onFrameAvailable();
        if (takeSnapShot) {
            screenGrab();
            takeSnapShot = false;
        }
    }

    public CameraSurfaceView(Context context) {
        super(context);
        // TODO Auto-generated constructor stub
    }
}
Where I'm Heading
I'm preparing to root the device and then use the onFrameAvailable method to cue an external root process such as one of these:
post 23610900
post 10965409
post 4998527
I'm hoping I can find a way to avoid the root hack.
Thank you in advance!
OK, I figured out a way to make it work.
Update: My working solution is here:
https://github.com/stevehenderson/GoogleTango_AR_VideoCapture
I essentially set up a "man (renderer) in the middle" attack on the rendering pipeline.
This approach intercepts the setRenderer call from the TangoCameraPreview base class and gives access to the base renderer's onDrawFrame() method and the GL context. I then add additional methods to this extended renderer that allow reading of the GL buffer.
General approach:
1) Extend the TangoCameraPreview class (in my example, ReadableTangoCameraPreview). Override setRenderer(GLSurfaceView.Renderer renderer), keeping a reference to the base renderer, and replace the renderer with your own "wrapped" GLSurfaceView.Renderer that adds methods for rendering the backbuffer to an image on the device.
2) Create your own GLSurfaceView.Renderer implementation (e.g. my ScreenGrabRenderer class) that implements all the GLSurfaceView.Renderer methods, passing them through to the base renderer captured in step 1. Also add a few new methods to "cue" when you want to grab the image.
3) Implement the ScreenGrabRenderer described in step 2 above.
4) Use a callback interface (my TangoCameraScreengrabCallback) to communicate when an image has been copied.
It works pretty well, and allows one to grab the camera bits in an image without rooting the device.
Note: I haven't had the need to closely synchronize my captured images with the point cloud. So I haven't checked the latency. For best results, you may need to invoke the C methods proposed by Mark.
Here's what each of my classes looks like:
///Main Activity class where the bulk of the Tango code is
.
.
.
// Create our Preview view and set it as the content of our activity.
mTangoCameraPreview = new ReadableTangoCameraPreview(getActivity());
RelativeLayout preview = (RelativeLayout) view.findViewById(R.id.camera_preview);
preview.addView(mTangoCameraPreview);
.
.
.
// When you want to take a snapshot, call the takeSnapShot() method
// (you can make this respond to a button)
mTangoCameraPreview.takeSnapShot();
.
.
.
// Main Tango listeners
@Override
public void onFrameAvailable(final int cameraId) {
    // Update the UI with TangoPose information
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
                tangoCameraPreview.onFrameAvailable();
            }
        }
    });
}
ReadableTangoCameraPreview Class
public class ReadableTangoCameraPreview extends TangoCameraPreview implements TangoCameraScreengrabCallback {
    Activity mainActivity;
    private static final String TAG = ReadableTangoCameraPreview.class.getSimpleName();
    // An intercept renderer
    ScreenGrabRenderer screenGrabRenderer;
    private boolean takeSnapShot = false;

    @Override
    public void setRenderer(GLSurfaceView.Renderer renderer) {
        // Create our "man in the middle"
        screenGrabRenderer = new ScreenGrabRenderer(renderer);
        // Set its callback
        screenGrabRenderer.setTangoCameraScreengrabCallback(this);
        // Tell the TangoCameraPreview class to use this intermediate renderer
        super.setRenderer(screenGrabRenderer);
        Log.i(TAG, "Intercepted the renderer!!!");
    }

    /**
     * Set a trigger for a snapshot. Call this from the main activity
     * in response to a user input.
     */
    public void takeSnapShot() {
        takeSnapShot = true;
    }

    @Override
    public void onFrameAvailable() {
        super.onFrameAvailable();
        if (takeSnapShot) {
            //screenGrabWithRoot();
            screenGrabRenderer.grabNextScreen(0, 0, this.getWidth(), this.getHeight());
            takeSnapShot = false;
        }
    }

    public ReadableTangoCameraPreview(Activity context) {
        super(context);
        mainActivity = context;
    }

    public void newPhoto(String aNewPhotoPath) {
        // This gets called when a new photo was grabbed/created in the renderer
        Log.i(TAG, "New image available at " + aNewPhotoPath);
    }
}
ScreenGrabRenderer class
(wraps the TangoCameraPreview's default renderer)
/**
 * This is an intermediate class that intercepts all calls to the TangoCameraPreview's
 * default renderer.
 *
 * It simply passes all render calls through to the default renderer.
 *
 * When required, it can also use the renderer methods to dump a copy of the frame to a bitmap.
 *
 * @author henderso
 */
public class ScreenGrabRenderer implements GLSurfaceView.Renderer {
    TangoCameraScreengrabCallback mTangoCameraScreengrabCallback;
    GLSurfaceView.Renderer tangoCameraRenderer;
    private static final String TAG = ScreenGrabRenderer.class.getSimpleName();
    private String lastFileName = "unset";
    boolean grabNextScreen = false;
    int grabX = 0;
    int grabY = 0;
    int grabWidth = 640;
    int grabHeight = 320;

    public void setTangoCameraScreengrabCallback(TangoCameraScreengrabCallback aTangoCameraScreengrabCallback) {
        mTangoCameraScreengrabCallback = aTangoCameraScreengrabCallback;
    }

    /**
     * Cue the renderer to grab the next screen. This is a signal that will
     * be detected inside the onDrawFrame() method.
     */
    public void grabNextScreen(int x, int y, int w, int h) {
        grabNextScreen = true;
        grabX = x;
        grabY = y;
        grabWidth = w;
        grabHeight = h;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        tangoCameraRenderer.onSurfaceCreated(gl, config);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        tangoCameraRenderer.onSurfaceChanged(gl, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        tangoCameraRenderer.onDrawFrame(gl);
        if (grabNextScreen) {
            screenGrab(gl);
            grabNextScreen = false;
        }
    }

    /**
     * Creates a bitmap given a certain dimension and an OpenGL context
     *
     * This code was lifted from here:
     * http://stackoverflow.com/questions/5514149/capture-screen-of-glsurfaceview-to-bitmap
     */
    private Bitmap createBitmapFromGLSurface(int x, int y, int w, int h, GL10 gl)
            throws OutOfMemoryError {
        int bitmapBuffer[] = new int[w * h];
        int bitmapSource[] = new int[w * h];
        IntBuffer intBuffer = IntBuffer.wrap(bitmapBuffer);
        intBuffer.position(0);
        try {
            gl.glReadPixels(x, y, w, h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, intBuffer);
            int offset1, offset2;
            for (int i = 0; i < h; i++) {
                offset1 = i * w;
                offset2 = (h - i - 1) * w;
                for (int j = 0; j < w; j++) {
                    // Swap red/blue channels and flip the image vertically
                    int texturePixel = bitmapBuffer[offset1 + j];
                    int blue = (texturePixel >> 16) & 0xff;
                    int red = (texturePixel << 16) & 0x00ff0000;
                    int pixel = (texturePixel & 0xff00ff00) | red | blue;
                    bitmapSource[offset2 + j] = pixel;
                }
            }
        } catch (GLException e) {
            Log.e(TAG, e.toString());
            return null;
        }
        return Bitmap.createBitmap(bitmapSource, w, h, Bitmap.Config.ARGB_8888);
    }

    /**
     * Writes a copy of the GLSurface backbuffer to storage
     */
    private void screenGrab(GL10 gl) {
        long fileprefix = System.currentTimeMillis();
        String targetPath = Environment.getExternalStorageDirectory() + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".png";
        String fullPath = "error";
        Bitmap image = createBitmapFromGLSurface(grabX, grabY, grabWidth, grabHeight, gl);
        if (!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }
        try {
            File targetDirectory = new File(targetPath);
            File photo = new File(targetDirectory, imageFileName);
            FileOutputStream fos = new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.PNG, 100, fos);
            fos.flush();
            fos.close();
            fullPath = targetPath + imageFileName;
            Log.i(TAG, "Grabbed an image in target path:" + fullPath);
            /// Notify the outer class(es)
            if (mTangoCameraScreengrabCallback != null) {
                mTangoCameraScreengrabCallback.newPhoto(fullPath);
            } else {
                Log.i(TAG, "Callback not set properly..");
            }
        } catch (FileNotFoundException e) {
            Log.e(TAG, "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(TAG, "Exception " + e);
            e.printStackTrace();
        }
        lastFileName = fullPath;
    }

    /**
     * Constructor
     * @param baseRenderer the default renderer to wrap
     */
    public ScreenGrabRenderer(GLSurfaceView.Renderer baseRenderer) {
        tangoCameraRenderer = baseRenderer;
    }
}
TangoCameraScreengrabCallback Interface
(not required unless you want to pass info back from the screen grab renderer)
/*
 * The TangoCameraScreengrabCallback is a generic interface that provides a callback mechanism
 * to an implementing activity.
 */
interface TangoCameraScreengrabCallback {
    public void newPhoto(String aNewPhotoPath);
}
I haven't tried it on the latest release, but it was the absence of this functionality that drove me to the C API, where I could get image data. A recent post, I think on the G+ page, seemed to indicate that the Unity API now returns image data as well. For a company that keeps scolding us when we don't use Java, it certainly is an odd lag :-)

Dark Google Tango camera surface when using depth information

Situation: I'm trying to write a Google Tango application in Java that allows the user to see the Tango's camera feed with virtual objects on top (i.e. a video see-through augmented reality view) AND uses the Tango depth/point cloud information.
Problem: Whenever I try to enable the depth sensor on the Tango, the camera image gets very dark. When I disable the depth sensing, everything is OK. Here are some screenshots:
Google Tango with Depth information enabled:
mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
Same application with Depth information disabled:
mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, false);
Question: How do I get a clean camera image AND enable the Tango's depth information? If pure color is not possible, can I get high-contrast B/W? I suspect this is a synchronization issue: perhaps the surface is drawn after the depth/point-cloud algorithm perturbs the image, or the camera format is changed to support depth sensing and is unsuitable for preview.
I'm using the Tango.setSurface technique suggested in this helpful and related post
I'm purposefully NOT using the Android's native camera APIs.
(EDIT: This post is based on Fermat update. Have not confirmed after Gauss update)
My main activity code is posted below. The full project is at this GitHub repo.
Thanks in advance!
/*
* Copyright 2014 Google Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.digitalblacksmith.tango_ar_pointcloud;
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.Tango.OnTangoUpdateListener;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoErrorException;
import com.google.atap.tangoservice.TangoEvent;
import com.google.atap.tangoservice.TangoInvalidException;
import com.google.atap.tangoservice.TangoOutOfDateException;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;
import android.app.Activity;
import android.content.Intent;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.MotionEvent;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;
import android.view.ViewGroup.LayoutParams;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;
import java.io.FileInputStream;
import java.io.IOException;
import java.text.DecimalFormat;
import java.util.ArrayList;
/**
*
* Modified Main Activity class from the Original Google Tango SDK Motion Tracking API Sample.
*
* Creates a GLSurfaceView for the OpenGL scene, which displays a cube
* Then adds a SurfaceView for the camera image. The surface is connected
* to the Tango camera. This is necessary if one wants to get point cloud
* data from the Tango AND use the camera for video-see through Augmented Reality.
*
* Lessons learned: Ensure your onPause and onResume actions are handled correctly
* in terms of disconnecting and reconnecting the Tango!! If the Tango is not
* disconnected and reconnected properly, you will get a black background and
* may think the issue is something else.
*
* @author Steve Henderson @stevehenderson
*
*/
public class PointCloudActivity extends Activity implements View.OnClickListener, SurfaceHolder.Callback {
    private static final String TAG = PointCloudActivity.class.getSimpleName();
    private static final int SECS_TO_MILLISECS = 1000;
    private Tango mTango;
    private TangoConfig mConfig;
    private TextView mDeltaTextView;
    private TextView mPoseCountTextView;
    private TextView mPoseTextView;
    private TextView mQuatTextView;
    private TextView mPoseStatusTextView;
    private TextView mTangoServiceVersionTextView;
    private TextView mApplicationVersionTextView;
    private TextView mTangoEventTextView;
    private TextView mPointCountTextView;
    private TextView mAverageZTextView;
    private TextView mFrequencyTextView;
    private float mPreviousTimeStamp;
    private int mPreviousPoseStatus;
    private int count;
    private float mDeltaTime;
    private Button mMotionResetButton;
    private Button mDropBoxButton;
    //private boolean mIsAutoRecovery;
    //private PCRenderer mOpenGL2Renderer;
    private OpenGL2PointCloudRenderer mOpenGL2Renderer;
    private DemoRenderer mDemoRenderer;
    private GLSurfaceView mGLView;
    private SurfaceView surfaceView;
    private float mXyIjPreviousTimeStamp;
    private float mCurrentTimeStamp;
    boolean first_initialized = false;
    Surface tangoSurface;
    Vector3f lastPosition;
    Vector3f dropBoxPosition;

    /**
     * Set up the activity using OpenGL 2.0
     */
    @SuppressWarnings("deprecation")
    private void setUpOpenGL20() {
        ///////////////////////
        // Create GLSurface
        ///////////////////////
        // OpenGL view where all of the graphics are drawn
        mGLView = new GLSurfaceView(this);
        mGLView.setEGLContextClientVersion(2);
        mGLView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        SurfaceHolder glSurfaceHolder = mGLView.getHolder();
        glSurfaceHolder.setFormat(PixelFormat.TRANSLUCENT);
        ////////////////////////////////////
        // Instantiate the Tango service
        ///////////////////////////////////
        mTango = new Tango(this);
        // Create a new Tango configuration and enable the MotionTrackingActivity API
        mConfig = new TangoConfig();
        mConfig = mTango.getConfig(TangoConfig.CONFIG_TYPE_CURRENT);
        mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
        /// ---> If the next property is false (depth disabled) then the image is OK <-------
        mConfig.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
        // Configure OpenGL renderer
        //mRenderer = new GLClearRenderer();
        int maxDepthPoints = mConfig.getInt("max_point_cloud_elements");
        mOpenGL2Renderer = new OpenGL2PointCloudRenderer(maxDepthPoints);
        mDemoRenderer = mOpenGL2Renderer;
        mOpenGL2Renderer.setFirstPersonView();
        mGLView.setRenderer(mOpenGL2Renderer);
        mGLView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        //setContentView(mGLView);
        try {
            setTangoListeners();
        } catch (TangoErrorException e) {
            Toast.makeText(getApplicationContext(), R.string.TangoError, Toast.LENGTH_SHORT).show();
        } catch (SecurityException e) {
            Toast.makeText(getApplicationContext(), R.string.motiontrackingpermission,
                    Toast.LENGTH_SHORT).show();
        }
        //////////////////////////
        // Create Camera Surface
        //////////////////////////
        surfaceView = new SurfaceView(this);
        SurfaceHolder activitySurfaceHolder = surfaceView.getHolder();
        activitySurfaceHolder.addCallback(this);
        //mGLView.setZOrderOnTop(true);
        setContentView(mGLView);
        addContentView(surfaceView, new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));
        /////////////////////////
        // Create UI Objects
        ////////////////////////
        LayoutInflater inflater = getLayoutInflater();
        View tmpView;
        tmpView = inflater.inflate(R.layout.activity_motion_tracking, null);
        getWindow().addContentView(tmpView, new ViewGroup.LayoutParams(ViewGroup.LayoutParams.FILL_PARENT,
                ViewGroup.LayoutParams.FILL_PARENT));
        mApplicationVersionTextView = (TextView) findViewById(R.id.appversion);
        mApplicationVersionTextView.setText("OpenGL 2.0");
        // Button to reset motion tracking
        mMotionResetButton = (Button) findViewById(R.id.resetmotion);
        // Set up button click listeners
        mMotionResetButton.setOnClickListener(this);
        // Button to drop position box (breadcrumb cube)
        mDropBoxButton = (Button) findViewById(R.id.dropbox);
        // Set up button click listeners
        mDropBoxButton.setOnClickListener(this);
        //mOpenGL2Renderer.setFirstPersonView();
    }
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = getIntent();
        setUpOpenGL20();
        // Text views for displaying translation and rotation data
        mPoseTextView = (TextView) findViewById(R.id.pose);
        mQuatTextView = (TextView) findViewById(R.id.quat);
        mPoseCountTextView = (TextView) findViewById(R.id.posecount);
        mDeltaTextView = (TextView) findViewById(R.id.deltatime);
        mTangoEventTextView = (TextView) findViewById(R.id.tangoevent);
        mPointCountTextView = (TextView) findViewById(R.id.pointCount);
        mAverageZTextView = (TextView) findViewById(R.id.averageZ);
        mFrequencyTextView = (TextView) findViewById(R.id.frameDelta);
        // Text views for the status of the pose data and Tango library versions
        mPoseStatusTextView = (TextView) findViewById(R.id.status);
        mTangoServiceVersionTextView = (TextView) findViewById(R.id.version);
        // Display the library version for debug purposes
        mTangoServiceVersionTextView.setText(mConfig.getString("tango_service_library_version"));
        dropBoxPosition = new Vector3f();
        lastPosition = new Vector3f();
    }

    private void motionReset() {
        mTango.resetMotionTracking();
    }

    private void dropBox() {
        dropBoxPosition.setTo(lastPosition);
    }

    @Override
    protected void onPause() {
        super.onPause();
        Log.i(TAG, "OnPause");
        try {
            mTango.disconnect();
            Log.i(TAG, "Pausing..TANGO disconnected");
        } catch (TangoErrorException e) {
            Toast.makeText(getApplicationContext(), R.string.TangoError, Toast.LENGTH_SHORT).show();
        }
    }

    protected void onResume() {
        super.onResume();
        Log.i(TAG, "OnResume");
        try {
            //setTangoListeners();
        } catch (TangoErrorException e) {
            Log.e(TAG, e.toString());
        } catch (SecurityException e) {
            Log.e(TAG, e.toString());
        }
        try {
            if (first_initialized) mTango.connect(mConfig);
        } catch (TangoOutOfDateException e) {
            Log.e(TAG, e.toString());
        } catch (TangoErrorException e) {
            Log.e(TAG, e.toString());
        }
        try {
            //setUpExtrinsics();
        } catch (TangoErrorException e) {
            Log.e(TAG, e.toString());
        } catch (SecurityException e) {
            Log.e(TAG, e.toString());
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.resetmotion:
                motionReset();
                break;
            case R.id.dropbox:
                dropBox();
                break;
            default:
                Log.w(TAG, "Unknown button click");
                return;
        }
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return false;
    }
    /**
     * Set up the TangoConfig and the listeners for the Tango service, then begin using the Motion
     * Tracking API. This is called in response to the user clicking the 'Start' button.
     */
    private void setTangoListeners() {
        // Lock configuration and connect to Tango
        // Select coordinate frame pair
        final ArrayList<TangoCoordinateFramePair> framePairs =
                new ArrayList<TangoCoordinateFramePair>();
        framePairs.add(new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE));
        // Listen for new Tango data
        mTango.connectListener(framePairs, new OnTangoUpdateListener() {
            @Override
            public void onPoseAvailable(final TangoPoseData pose) {
                // Log whenever Motion Tracking enters an invalid state
                if (pose.statusCode == TangoPoseData.POSE_INVALID) {
                    Log.w(TAG, "Invalid State");
                }
                if (mPreviousPoseStatus != pose.statusCode) {
                    count = 0;
                }
                count++;
                mPreviousPoseStatus = pose.statusCode;
                mDeltaTime = (float) (pose.timestamp - mPreviousTimeStamp) * SECS_TO_MILLISECS;
                mPreviousTimeStamp = (float) pose.timestamp;
                // Update the OpenGL renderable objects with the new Tango pose data
                float[] translation = pose.getTranslationAsFloats();
                mGLView.requestRender();
                // Update the UI with TangoPose information
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        DecimalFormat threeDec = new DecimalFormat("0.000");
                        String translationString = "[" + threeDec.format(pose.translation[0])
                                + ", " + threeDec.format(pose.translation[1]) + ", "
                                + threeDec.format(pose.translation[2]) + "] ";
                        String quaternionString = "[" + threeDec.format(pose.rotation[0]) + ", "
                                + threeDec.format(pose.rotation[1]) + ", "
                                + threeDec.format(pose.rotation[2]) + ", "
                                + threeDec.format(pose.rotation[3]) + "] ";
                        float x = (float) pose.translation[0];
                        float y = (float) pose.translation[1];
                        float z = (float) pose.translation[2];
                        mDemoRenderer.setCameraPosition(x - dropBoxPosition.x, y - dropBoxPosition.y, z - dropBoxPosition.z);
                        lastPosition.setTo(x, y, z);
                        float qx = (float) pose.rotation[0];
                        float qy = (float) pose.rotation[1];
                        float qz = (float) pose.rotation[2];
                        float qw = (float) pose.rotation[3];
                        mDemoRenderer.setCameraAngles(qx, qy, qz, qw);
                        // Display pose data on screen in TextViews
                        //Log.i(TAG, translationString);
                        mPoseTextView.setText(translationString);
                        mQuatTextView.setText(quaternionString);
                        mPoseCountTextView.setText(Integer.toString(count));
                        mDeltaTextView.setText(threeDec.format(mDeltaTime));
                        if (pose.statusCode == TangoPoseData.POSE_VALID) {
                            mPoseStatusTextView.setText(R.string.pose_valid);
                        } else if (pose.statusCode == TangoPoseData.POSE_INVALID) {
                            mPoseStatusTextView.setText(R.string.pose_invalid);
                        } else if (pose.statusCode == TangoPoseData.POSE_INITIALIZING) {
                            mPoseStatusTextView.setText(R.string.pose_initializing);
                        } else if (pose.statusCode == TangoPoseData.POSE_UNKNOWN) {
                            mPoseStatusTextView.setText(R.string.pose_unknown);
                        }
                    }
                });
            }

            @Override
            public void onXyzIjAvailable(final TangoXyzIjData xyzIj) {
                //Log.i(TAG, "xyzijAvailable!!!!!!!!");
                mCurrentTimeStamp = (float) xyzIj.timestamp;
                final float frameDelta = (mCurrentTimeStamp - mXyIjPreviousTimeStamp)
                        * SECS_TO_MILLISECS;
                mXyIjPreviousTimeStamp = mCurrentTimeStamp;
                byte[] buffer = new byte[xyzIj.xyzCount * 3 * 4];
                //////mGLView.requestRender();
                FileInputStream fileStream = new FileInputStream(
                        xyzIj.xyzParcelFileDescriptor.getFileDescriptor());
                try {
                    fileStream.read(buffer,
                            xyzIj.xyzParcelFileDescriptorOffset, buffer.length);
                    fileStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                try {
                    TangoPoseData pointCloudPose = mTango.getPoseAtTime(
                            mCurrentTimeStamp, framePairs.get(0));
                    mOpenGL2Renderer.getPointCloud().UpdatePoints(buffer,
                            xyzIj.xyzCount);
                    mOpenGL2Renderer.getModelMatCalculator()
                            .updatePointCloudModelMatrix(
                                    pointCloudPose.getTranslationAsFloats(),
                                    pointCloudPose.getRotationAsFloats());
                    mOpenGL2Renderer.getPointCloud().setModelMatrix(
                            mOpenGL2Renderer.getModelMatCalculator()
                                    .getPointCloudModelMatrixCopy());
                } catch (TangoErrorException e) {
                    Toast.makeText(getApplicationContext(),
                            R.string.TangoError, Toast.LENGTH_SHORT).show();
                } catch (TangoInvalidException e) {
                    Toast.makeText(getApplicationContext(),
                            R.string.TangoError, Toast.LENGTH_SHORT).show();
                }
                // Must run UI changes on the UI thread. Running in the Tango
                // service thread will result in an error.
                runOnUiThread(new Runnable() {
                    DecimalFormat threeDec = new DecimalFormat("0.000");

                    @Override
                    public void run() {
                        // Display number of points in the point cloud
                        mPointCountTextView.setText(Integer
                                .toString(xyzIj.xyzCount));
                        mFrequencyTextView.setText(""
                                + threeDec.format(frameDelta));
                        mAverageZTextView.setText(""
                                + threeDec.format(mOpenGL2Renderer.getPointCloud()
                                .getAverageZ()));
                    }
                });
            }

            @Override
            public void onTangoEvent(final TangoEvent event) {
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        mTangoEventTextView.setText(event.eventKey + ": " + event.eventValue);
                    }
                });
            }
        });
    }
    private void setUpExtrinsics() {
        // Get device to IMU matrix.
        TangoPoseData device2IMUPose = new TangoPoseData();
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
        framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
        framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
        device2IMUPose = mTango.getPoseAtTime(0.0, framePair);
        // mRenderer.getModelMatCalculator().SetDevice2IMUMatrix(
        //         device2IMUPose.getTranslationAsFloats(), device2IMUPose.getRotationAsFloats());
        // Get color camera to IMU matrix.
        TangoPoseData color2IMUPose = new TangoPoseData();
        framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
        framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
        color2IMUPose = mTango.getPoseAtTime(0.0, framePair);
        // mRenderer.getModelMatCalculator().SetColorCamera2IMUMatrix(
        //         color2IMUPose.getTranslationAsFloats(), color2IMUPose.getRotationAsFloats());
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        Surface surface = holder.getSurface();
        if (surface.isValid()) {
            mTango.connectSurface(0, surface);
            first_initialized = true;
            mTango.connect(mConfig);
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width,
                               int height) {
        // TODO Auto-generated method stub
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        mTango.disconnectSurface(0);
    }
}
I discovered a solution of sorts, with the help of my friends at CGUI.
In my post above, you will note that the depth-enabled image is underexposed. It seems the Tango does some auto-exposure on the camera image.
When I tried it during the daytime, with good natural light and some added flood lights, I got better results:
Depth enabled:
Depth disabled:
So one possible workaround/consideration when using color and depth together is to carefully manage the light in the environment. This makes sense given some of the calibration routines I've seen in the Tango demo apps, which call for "daylight".
UPDATE: You can also select the B/W fisheye camera, which might be better for low-light situations if you don't mind the lack of color and the distortion:
mTango.connectSurface(2, surface); // 0 --> color cam; 2 --> B/W fisheye
