I have created an augmented reality application using Monodroid, and it works fine technically. However, the graphics are drawn on a Canvas, which is far too slow.
The application is a simple heads-up compass and speed display, à la Luke Skywalker's binoculars.
I am trying to get a camera preview going with a translucent/transparent OpenGL overlay. Yes, I have read what's available, but it's all pure Android SDK / Java.
Does anyone know a way to get this effect in C# and Monodroid, possibly using the AndroidGameView? Whatever I do, I can see one or the other, but never both at the same time.
http://bobpowelldotnet.blogspot.fr/2012/10/monodroid-camera-preview-as-opengl.html
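For anyone hitting the same wall, here is a minimal sketch of the layering approach the question describes, independent of the write-up in the link above: a SurfaceView carrying the camera preview at the bottom of a FrameLayout, and an AndroidGameView subclass with a translucent surface and an RGBA EGL config on top. The class names (HudActivity, HudGameView) are illustrative, and the exact OpenTK overloads (the AndroidGraphicsMode constructor, the GL.Clear cast) vary slightly between Mono for Android / OpenTK versions, so treat this as a starting point rather than drop-in code.

```csharp
using Android.App;
using Android.Content;
using Android.Graphics;
using Android.OS;
using Android.Views;
using Android.Widget;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.ES11;
using OpenTK.Platform.Android;
using HwCamera = Android.Hardware.Camera;

[Activity(Label = "HUD", MainLauncher = true)]
public class HudActivity : Activity, ISurfaceHolderCallback
{
    HwCamera camera;          // requires the CAMERA permission in the manifest
    SurfaceView preview;      // camera preview, bottom of the stack
    HudGameView overlay;      // translucent GL view, top of the stack

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);

        var root = new FrameLayout(this);

        preview = new SurfaceView(this);
        preview.Holder.AddCallback(this);
        preview.Holder.SetType(SurfaceType.PushBuffers);  // only needed on pre-3.0 devices
        root.AddView(preview);

        overlay = new HudGameView(this);
        overlay.Holder.SetFormat(Format.Translucent);     // give the GL surface an alpha channel
        overlay.SetZOrderMediaOverlay(true);              // keep it above the preview surface
        root.AddView(overlay);

        SetContentView(root);
    }

    public void SurfaceCreated(ISurfaceHolder holder)
    {
        camera = HwCamera.Open();
        camera.SetPreviewDisplay(holder);
        camera.StartPreview();
    }

    public void SurfaceChanged(ISurfaceHolder holder, Format format, int width, int height) { }

    public void SurfaceDestroyed(ISurfaceHolder holder)
    {
        if (camera != null)
        {
            camera.StopPreview();
            camera.Release();
            camera = null;
        }
    }
}

class HudGameView : AndroidGameView
{
    public HudGameView(Context context) : base(context) { }

    protected override void CreateFrameBuffer()
    {
        // Ask EGL for an 8888 config so the surface really is translucent.
        GraphicsMode = new AndroidGraphicsMode(new ColorFormat(8, 8, 8, 8), 16, 0, 0, 2, false);
        base.CreateFrameBuffer();
    }

    protected override void OnLoad(System.EventArgs e)
    {
        base.OnLoad(e);
        Run(30);   // render loop at ~30 fps
    }

    protected override void OnRenderFrame(FrameEventArgs e)
    {
        base.OnRenderFrame(e);

        // Clear to fully transparent so the camera preview shows through,
        // then draw the compass / speed HUD with normal GL ES calls.
        GL.ClearColor(0f, 0f, 0f, 0f);
        GL.Clear((uint)All.ColorBufferBit);

        SwapBuffers();
    }
}
```

The two key points are the RGBA colour format in CreateFrameBuffer and the zero-alpha clear; without both of them you tend to get exactly the "one or the other, never both" symptom described above.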
I'm trying to use an ARView in a macOS-only project. I can load a scene (tested with a Reality file from Reality Composer), and it renders fine.
But how do I control the camera with the mouse?
Examples of this are Reality Composer, Reality Converter, and previewing a Reality file in Xcode, where you can drag anywhere on the view and the camera pans, rotates, etc. In SceneKit the equivalent is allowsCameraControl.
There's no cameraMode on the macOS ARView, probably because it only supports nonAR anyway.
I tried adding a PerspectiveCamera hoping it would unlock interactivity, but no luck.
I guess I could just implement all the gestures myself, but that's a lot of work, and Apple seems to be using a standard way of interacting with the scene with the mouse - and also a standard grid, which I'd love to use, too.
I'm using macOS 12 beta 1, but that shouldn't make a big difference, since ARView requires macOS 10.15.
Neither on macOS nor on iOS does RealityKit have a property similar to SceneKit's .allowsCameraControl. However, using this code as a starting point, you can create your own camera control in RealityKit.
This post may also be helpful for you.
Is there a way to get the OpenGL context from native code? Let's say I need to draw something from my Objective-C code: anything, some object, a complex Bézier curve, etc. I know that I need an Enterprise account, so I'm asking just about the OpenGL context. What would it look like and how do I do it?
Corona Enterprise has APIs that allow you to interact with the Corona "environment" from native code, but I don't think it is possible to draw something inside the Corona OpenGL view (it may be possible, but Corona doesn't make it easy for you).
Usually, when you draw or add something from native code, like an image, you add it to an overlay view that sits above the Corona OpenGL view. (In fact, that is why, when using Corona Pro, all native objects always appear above your Corona elements.)
I'm looking at the Sony SmartEyeGlass and it seems like the only way to interact with the "augmented reality layer" (what's drawn on the glasses) is through the proprietary Sony APIs.
I'm wondering whether there is a way to let OpenGL ES manage this layer as a GLSurfaceView?
Or is there an alternative way to do 3D rendering on the glasses?
At the moment, there isn't a dedicated API to connect with OpenGL. The way to achieve OpenGL rendering with SmartEyeglass is to render your content to a Bitmap and show it using SmartEyeglassControlUtils.showBitmap(Bitmap bitmap).
Here you will find a solution for rendering OpenGL to a Bitmap:
Run Android OpenGL in Background as Rendering Resource for App?
Please let us know in a comment what kind of application you need this OpenGL feature for.
Good luck.
I'm trying to figure out how to implement pinch-to-zoom functionality. My problem is I'm not sure how to do it algorithmically.
I have the pinch positions of both fingers and the amount they've moved since the last frame. At first I tried making the pinch amount the delta of the distance between the two fingers; however, every way I've tried to build on that concept has been unwieldy.
Even if I manage to get the pinching working semi-decently, I still have the problem of the zoom direction and how to make the image zoom in on the center of the pinch area...
Is there a proper way of implementing such functionality?
I also recommend reading this, a really well-implemented "gold standard" pinch:
http://adtsai.blogspot.com/2010/09/pinch-zooming-using-xna4-on-wp7-getting.html
It also references a pinch-to-zoom add-in, so you can test pinching on the emulator with just a mouse.
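For reference, here is a minimal sketch of the two ideas that make that approach work: use the ratio of the finger distances rather than a raw delta, and translate so that the point under the pinch centre stays put. The PinchCamera class and its zoom/offset convention (screen = world * Zoom + Offset) are illustrative, not taken from the linked article.

```csharp
using Microsoft.Xna.Framework;

// Illustrative camera: screenPosition = worldPosition * Zoom + Offset.
public class PinchCamera
{
    public float Zoom = 1f;
    public Vector2 Offset = Vector2.Zero;

    // oldA/oldB are the finger positions from the previous sample,
    // newA/newB are the current ones.
    public void ApplyPinch(Vector2 oldA, Vector2 oldB, Vector2 newA, Vector2 newB)
    {
        // Ratio of distances, not the difference: it is resolution independent
        // and composes cleanly when applied frame after frame.
        float scale = Vector2.Distance(newA, newB) / Vector2.Distance(oldA, oldB);

        // Zoom about the pinch centre: the world point currently under the
        // centre must map to the same screen point after the zoom changes.
        Vector2 centre = (newA + newB) * 0.5f;
        Vector2 worldUnderCentre = (centre - Offset) / Zoom;

        Zoom *= scale;
        Offset = centre - worldUnderCentre * Zoom;
    }
}
```

Because the scale is a ratio, spreading the fingers gives a value above 1 (zoom in) and pinching them together gives a value below 1 (zoom out), which also settles the zoom-direction question.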
What you want to do is use the built-in gesture API (specifically Pinch and PinchComplete). That way, you can take advantage of the heuristics that the XNA/WP7 team has already built in. Your app will feel "more native" this way because it will respond to a pinch gesture the same way the rest of the OS does.
Nick Gravelyn has a great intro to the gesture API here:
http://blogs.msdn.com/b/nicgrave/archive/2010/07/12/touch-gestures-on-windows-phone-7.aspx
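As a hedged sketch of what reading those gestures looks like (the PinchCamera type is the illustrative helper from the sketch a little further up, and the class name PinchInput is made up):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input.Touch;

public class PinchInput
{
    readonly PinchCamera camera;

    public PinchInput(PinchCamera camera)
    {
        this.camera = camera;
        // Tell the touch panel which gestures to report (do this once).
        TouchPanel.EnabledGestures = GestureType.Pinch | GestureType.PinchComplete;
    }

    // Call this from Game.Update().
    public void Update()
    {
        while (TouchPanel.IsGestureAvailable)
        {
            GestureSample g = TouchPanel.ReadGesture();
            if (g.GestureType != GestureType.Pinch)
                continue;   // e.g. PinchComplete: nothing to do here

            // Position/Position2 are the two touch points; Delta/Delta2 are
            // how far each point has moved since the previous gesture sample.
            Vector2 oldA = g.Position - g.Delta;
            Vector2 oldB = g.Position2 - g.Delta2;
            camera.ApplyPinch(oldA, oldB, g.Position, g.Position2);
        }
    }
}
```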
I found a few links to third-party solutions:
1) Dual-Touch SDK for Resistive Screens V1.0 Beta, Rotation Alpha
2) SciLor's HD2 / Leo Multitouch .NET CF DLL
I tried the Dual-Touch SDK, which worked fine on resistive-screen phones but not on capacitive-screen phones.
Is there a way to get text boxes, labels and other WPF-like controls in XNA that support margins, etc., and flex with the window size?
You might give CeGui a shot.
If your game needs advanced GUI capabilities, CeGui# might just hit the nail on the head for you. Marketese aside, this is a seriously good GUI library with Buttons, ListBoxes, Scrollbars, ProgressBars, Sliders, ComboBoxes and more.
To use the XNA version, you'll need to check out the latest copy from the project's SVN and load up CeGui-XNA.sln.
There are other options listed in this thread, but I have no idea how well any of the others work (and it probably isn't a comprehensive list anymore).
The official GUI systems FAQ thread in the XNA Forum:
What GUI systems are there for the XNA framework?
CeGui# is powerful, but it doesn't support the Xbox 360 (e.g. its design doesn't include responding to gamepad input); a major overhaul would be required to make it usable with anything other than a mouse and keyboard.
Not exactly what you're looking for, but here is an example of getting WinForms GUI elements mixed in with XNA 3D content:
http://creators.xna.com/en-US/sample/winforms_series1
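The core trick in that sample is creating a GraphicsDevice whose back buffer is presented into the handle of an ordinary WinForms control, so buttons and text boxes can sit next to the 3D viewport. A stripped-down, hedged sketch of the idea follows; the real sample wraps this in a reusable GraphicsDeviceControl, and names like viewport here are just illustrative.

```csharp
using System;
using System.Windows.Forms;
using Microsoft.Xna.Framework.Graphics;

static class Program
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "XNA in WinForms", Width = 800, Height = 600 };

        // Ordinary WinForms controls on the left...
        var button = new Button { Text = "A WinForms button", Dock = DockStyle.Left, Width = 160 };
        form.Controls.Add(button);

        // ...and a panel on the right that XNA will draw into.
        var viewport = new Panel { Dock = DockStyle.Fill };
        form.Controls.Add(viewport);

        form.Show();

        // Bind the back buffer to the panel's window handle.
        var pp = new PresentationParameters
        {
            DeviceWindowHandle = viewport.Handle,
            BackBufferWidth = viewport.ClientSize.Width,
            BackBufferHeight = viewport.ClientSize.Height,
            IsFullScreen = false
        };
        var device = new GraphicsDevice(GraphicsAdapter.DefaultAdapter,
                                        GraphicsProfile.Reach, pp);

        // Redraw whenever the message pump goes idle; the sample does this
        // more carefully inside its GraphicsDeviceControl.
        Application.Idle += (s, e) =>
        {
            device.Clear(Microsoft.Xna.Framework.Color.CornflowerBlue);
            // ... draw your 3D content here ...
            device.Present();
        };

        Application.Run(form);
    }
}
```

Resizing, device resets and per-control rendering are exactly what the sample's GraphicsDeviceControl adds on top of this bare minimum.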
Check out SQUID: http://www.ionstar.org/
It's a really clean, fast, and engine independent UI system. I've worked with it extensively and really enjoy using it. The download includes sample code for XNA 3.1, Truevision3D, and SlimDX.
It is possible to embed an XNA game in a WPF form (google: XNA in WPF) if you only target Windows. You will then have access to all the controls available in WPF for your 2D GUI.
If you also target the Xbox 360 or Zune, you'll have to make your own GUI library. :(