Gles2WatchFaceService deprecated? What now? - wear-os

For my Wear OS watch face project, I am using Gles2WatchFaceService so the watch face has smooth OpenGL animation when needed. I just updated the following dependency in build.gradle:
implementation 'com.google.android.support:wearable:2.9.0'
The old version was 2.8.1. Now, with 2.9.0, I see a deprecation warning in my main project file when hovering with the mouse over Gles2WatchFaceService.
It seems like Gles2WatchFaceService and WatchFaceService are deprecated. If so, what can I use instead? I found this warning at https://developer.android.com/reference/android/support/wearable/watchface/WatchFaceService:
This class is deprecated.
Use androidx.wear.watchface.WatchFaceService from the Jetpack Wear Watch Face libraries instead.
But what about Gles2WatchFaceService? Any help appreciated.

See https://developer.android.com/reference/androidx/wear/watchface/Renderer.GlesRenderer
Watch faces that require GLES20 rendering should extend their Renderer from this class.
A GlesRenderer is expected to be constructed on the background thread associated with WatchFaceService.getBackgroundThreadHandler inside a call to WatchFaceService.createWatchFace. All rendering is done on the UiThread. There is a memory barrier between construction and rendering, so no special threading primitives are required.
Two linked EGLContexts are created, eglBackgroundThreadContext and eglUiThreadContext, which are associated with the background thread and the UiThread respectively and are shared by all instances of the renderer. OpenGL objects created on one (e.g. shaders and textures) can be used on the other.
Because there can be more than one instance when editing, to save memory it's recommended to override createSharedAssets and load all static data there (e.g. models, textures, shaders, etc.). OpenGL objects created inside createSharedAssets will be available to all instances of the watch face on both threads.
If you need to make any OpenGL calls outside of render, onBackgroundThreadGlContextCreated or onUiThreadGlSurfaceCreated, then you must use either runUiThreadGlCommands or runBackgroundThreadGlCommands to execute a Runnable inside the corresponding context. Access to the GL contexts this way is necessary because GL contexts are not shared between renderers and there can be multiple watch face instances existing concurrently (e.g. headless and interactive, potentially from different watch faces if an APK contains more than one WatchFaceService). In addition, most drivers do not support concurrent access.
In Java it may be easier to extend androidx.wear.watchface.ListenableGlesRenderer instead.
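To give a feel for the Jetpack equivalent, here is a minimal Kotlin sketch against the androidx.wear.watchface API. MyGlesRenderer and MyWatchFaceService are placeholder names, the 16 ms frame period is arbitrary, and exact signatures may differ between library versions, so treat this as a starting point rather than the definitive migration:

    import android.opengl.GLES20
    import android.view.SurfaceHolder
    import androidx.wear.watchface.ComplicationSlotsManager
    import androidx.wear.watchface.Renderer
    import androidx.wear.watchface.WatchFace
    import androidx.wear.watchface.WatchFaceService
    import androidx.wear.watchface.WatchFaceType
    import androidx.wear.watchface.WatchState
    import androidx.wear.watchface.style.CurrentUserStyleRepository
    import java.time.ZonedDateTime

    // Placeholder renderer: takes over the role of Gles2WatchFaceService.Engine's
    // GL callbacks from the deprecated library.
    class MyGlesRenderer(
        surfaceHolder: SurfaceHolder,
        currentUserStyleRepository: CurrentUserStyleRepository,
        watchState: WatchState
    ) : Renderer.GlesRenderer(
        surfaceHolder,
        currentUserStyleRepository,
        watchState,
        16L // interactive frame period in ms, roughly 60 fps; pick what you need
    ) {
        override suspend fun onBackgroundThreadGlContextCreated() {
            // Compile shaders, load textures, etc.; objects created here are
            // usable on the UiThread because the two EGL contexts are linked.
        }

        override suspend fun onUiThreadGlSurfaceCreated(width: Int, height: Int) {
            GLES20.glViewport(0, 0, width, height)
        }

        override fun render(zonedDateTime: ZonedDateTime) {
            // Called on the UiThread for each frame; use runUiThreadGlCommands /
            // runBackgroundThreadGlCommands for GL work outside these callbacks.
            GLES20.glClearColor(0f, 0f, 0f, 1f)
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
            // ...draw the watch face for zonedDateTime...
        }

        override fun renderHighlightLayer(zonedDateTime: ZonedDateTime) {
            // Draw the highlight layer shown while the user edits the watch face.
        }
    }

    class MyWatchFaceService : WatchFaceService() {
        override suspend fun createWatchFace(
            surfaceHolder: SurfaceHolder,
            watchState: WatchState,
            complicationSlotsManager: ComplicationSlotsManager,
            currentUserStyleRepository: CurrentUserStyleRepository
        ): WatchFace = WatchFace(
            WatchFaceType.ANALOG,
            MyGlesRenderer(surfaceHolder, currentUserStyleRepository, watchState)
        )
    }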

According to the release notes: https://developer.android.com/wear/releases
Version 2.9.0 of the Wearable Support Library deprecates all remaining classes. The Wear OS Jetpack libraries should now be used instead.
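In practice, that means replacing the Wearable Support dependency with the Jetpack watch face artifacts, along these lines (the coordinates are in the androidx.wear.watchface group; the version shown is illustrative, so check the releases page above for the current one):

    implementation 'androidx.wear.watchface:watchface:1.0.0'
    // Optional: ListenableFuture-based wrappers such as ListenableGlesRenderer, handy from Java
    implementation 'androidx.wear.watchface:watchface-guava:1.0.0'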

Related

Is Simple DirectMedia Layer's SDL_GL_GetProcAddress working with OpenGL ES 2.0 for embedded systems

I'm building an application with OpenGL ES 2.0 and SDL2 for Android. Is SDL_GL_GetProcAddress working with OpenGL ES 2.0 on Android? Also, I know OpenGL ES 2.0 is a subset of OpenGL, so with this method can it run on desktop systems too?
From a quick browse of the SDL repository it should be.
SDL_video.c defines the implementation of SDL_GL_GetProcAddress simply to check that you've started OpenGL and then to call _this->GL_GetProcAddress, where _this is a global instance of the video driver.
SDL_androidvideo.c sets its GL_GetProcAddress to be Android_GLES_GetProcAddress, which is a preprocessor substitution for SDL_EGL_GetProcAddress.
So, so far: if you call SDL_GL_GetProcAddress, you'll get through to SDL_EGL_GetProcAddress.
SDL_egl.c implements SDL_EGL_GetProcAddress but declines to call eglGetProcAddress on Android. This looks like it's probably an error — the reason given is this bug but the status for that bug switched to 'Released' in June 2013, which I believe means that this has been fixed in Android for more than three years.
That aside, the fallback is to use SDL_LoadFunction, first with the direct function name, then with it preceded by an underscore, provided it's short enough to fit into the statically declared buffer. Which this one is.
(so, caveat: SDL_GL_GetProcAddress is definitely not thread-safe, even if you've taken appropriate share group steps to use multiple GL contexts, but if you're writing an SDL program you probably don't care)
Android should be using the dlopen version of SDL_sysloadso so it looks like SDL_LoadFunction is implemented directly as a call to dlsym. Which has no issues that I'm aware of under Android.
So, in summary: yes, that call should work. It'll use the platform-specific dynamic library loader rather than the EGL call, though it probably doesn't need to; but that's just an implementation detail.

Framework vs. Plug-in

I have an app that draws shapes on the screen, and another app that uses the same environment (frameworks) as the first app but draws a special kind of shape. I might write several apps that draw special shapes. What is the best way to link them to one app via Xcode?
Create each one as a framework? Or as a plug-in bundle?
A framework is linked to your app code, whereas with a plug-in, you need to fetch function pointers. The plug-in design would make sense if the plug-in, or some of its functions, might be missing. But if each of the functions is required by each of your apps, using a framework would be simpler.

CGL vs OpenGL on Mac

I am trying to figure out the relationship between CGL and OpenGL on Mac platform.
More specifically about the context. Do they share a context? If yes, how? Please give me a link to some related examples.
If no, then are there two contexts working in Core Animation applications which make use of OpenGL?
I am very confused by how the Mac uses OpenGL. Can somebody clarify?
CGL sets up device-specific contexts suitable for OpenGL to render to. Compare with WGL and GLX on Windows and X respectively. CGL understands how to query the graphics hardware for its pixel format, and then how to set up and configure a context (e.g. double-buffered or single-buffered, the resolution of the depth, stencil, and accumulation buffers, etc.). But it doesn't provide functions to draw in that context. Once you have created the context with CGL, you make it current, and then you can call OpenGL to render in that context.
In Core Graphics (do not confuse it with CGL), both context initialization and drawing into the context are handled by the same framework. But because OpenGL is an open standard and designed to be cross-platform, the rendering functionality and the device context functionality have been abstracted into separate frameworks.
CGL is the low-level interface to OpenGL on a Mac. You probably don't want to be using it if you are writing an OpenGL Mac app. I am currently in the process of creating an intuitive OpenGL Mac application template for Xcode 4, but in the meantime you can look at https://github.com/mk12/Pong-Ultimate, a pong clone I made using OpenGL. It uses NSOpenGL, a higher-level Cocoa interface to OpenGL.
You may also find the Apple docs helpful: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_intro/opengl_intro.html.

Hardware acceleration / performance and linkage of different Mac OS X graphics APIs, frameworks and layers

The more I read about the different types of views/contexts/rendering backends, the more confused I get.
According to http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29,
Mac OS X offers Quartz (Extreme) as a rendering backend, which itself is part of Core Graphics.
The Apple docs, and some books too, say that in any case you somehow use OpenGL (obviously, since this operating system uses OpenGL to render all of its UI).
I currently have an application that should capture real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa) and I would like to further process the frames (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter performance-wise whether you
a) draw the captured frame via Quartz (and implicitly via OpenGL), or
b) set up an OpenGL context and a DisplayLink and draw the buffered image explicitly via OpenGL?
What would be the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and documents from Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play.
Does going with b) automatically mean I use Core Video? And with which way can I use Core Animation?
Additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: BTW, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely that you will need to drop down to the OpenGL layer.
If they do offer everything you need, then you should see comparable speed. Perhaps a small (almost negligible) degradation, given the Objective-C overhead.

Automated testing for OpenGL application

I have a Java application that uses JOGL to provide a large part of the GUI.
Is there any tool which you know of, or have used, that can automate the testing of OpenGL applications (or more specifically, those using JOGL)?
Just to update: the tool can run on either Linux or Windows.
I have written unit tests for C++ (Qt on Linux) and OpenGL before. I don't know of any reason the same approach shouldn't work for Java too.
The things which worked for me are:
Abstract your OpenGL context provider so the rest of your code is independent of it. In my case the main app used Qt's QGLWidget, but the unit tests used a pbuffer-based one which I could create with no windowing infrastructure at all (other than a designated X11 DISPLAY). Later I added an "offscreen Mesa" (pure software OpenGL implementation) so they'd even work on a headless build machine with no GPU at all.
Keep your OpenGL code independent of your GUI code. In my case the OpenGL "rendering engine" didn't know anything about Qt classes (e.g. mouse events). Define your own testable API which isn't tied to any specific GUI concepts, and write tests for it.
In the unit tests, read the contents back from the framebuffer using glReadPixels and either hit them with assertions about which pixels should have particular values, or go down the regression-testing route and compare the framebuffer capture with a stored image you know is good (either from manual verification, or because it's produced from some other reference model); there is a sketch of this after these suggestions.
Allow a bit of fuzziness in any image regression testing; different OpenGL implementations produce slightly different output.
(I didn't do this, but...) Ideally you want to be able to test the GUI layer by asserting that it makes the expected sequence of calls to your rendering engine in response to GUI activity. If it does that, and you're confident of your render layer because of the above tests... well, no need to actually do the rendering. So create a suitable mock object of your rendering layer for use when GUI testing (I'd imagine tests like "mouse drag from here to there results in a call to the rendering layer to set a particular transform matrix"... stuff like that).
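To make the offscreen-context and glReadPixels suggestions concrete, here is a minimal sketch in Kotlin (JOGL's API is identical from Java). The assertPixel helper, the 64x64 size, and the tolerance are made up for illustration, and JOGL chooses a pbuffer or FBO for the offscreen drawable as available:

    import com.jogamp.common.nio.Buffers
    import com.jogamp.opengl.GL
    import com.jogamp.opengl.GLCapabilities
    import com.jogamp.opengl.GLDrawableFactory
    import com.jogamp.opengl.GLProfile
    import kotlin.math.abs

    // Read one pixel back from the framebuffer and assert its RGBA value, with a
    // small tolerance because different OpenGL implementations differ slightly.
    fun assertPixel(gl: GL, x: Int, y: Int, expected: IntArray, tolerance: Int = 3) {
        val buf = Buffers.newDirectByteBuffer(4)
        gl.glReadPixels(x, y, 1, 1, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, buf)
        for (channel in 0 until 4) {
            val actual = buf.get(channel).toInt() and 0xFF
            check(abs(actual - expected[channel]) <= tolerance) {
                "pixel ($x,$y) channel $channel: expected ${expected[channel]}, got $actual"
            }
        }
    }

    fun main() {
        // Offscreen drawable: no window needed, so this can run on a build machine.
        val profile = GLProfile.getDefault()
        val drawable = GLDrawableFactory.getFactory(profile)
            .createOffscreenAutoDrawable(null, GLCapabilities(profile), null, 64, 64)
        drawable.display()                // realizes the drawable and its GL context
        drawable.context.makeCurrent()    // so we can issue GL calls directly here
        val gl = drawable.gl
        gl.glClearColor(1f, 0f, 0f, 1f)   // in a real test, call your rendering engine
        gl.glClear(GL.GL_COLOR_BUFFER_BIT)
        assertPixel(gl, 32, 32, intArrayOf(255, 0, 0, 255))
        drawable.destroy()
    }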
You can test your JOGL-based application using Sikuli, which performs UI automation via image-recognition technology on screenshots.
I am currently using Sikuli to functional-test a Java app that is predominantly based on the NASA Worldwind Java SDK (which is based on JOGL). Using the Sikuli Java API my test suite can recognise icons within the OpenGL canvas, click on them and also drag them around. Sikuli can also recognise and extract text from the canvas via OCR, however the performance of this seems to be a little hit-and-miss (depending on language, font, size and background colours behind the text).
I have done a lot of automated UI testing using other tools that work by introspecting the windowing toolkit (e.g. Swing, SWT, native Windows) and found that Sikuli runs much slower than those; however, that is understandable given the amount of image processing it needs to do behind the scenes. Also note that Sikuli currently requires your application to run in a window (not in full-screen mode).
Sikuli runs on both Windows and Linux. I'd recommend you try it out. I couldn't find any other tool capable of doing that level of functional-testing of an OpenGL based application.
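For a flavour of what such a test looks like when driven from JVM code, here is a small Kotlin sketch against Sikuli's Java API (the .png file names are placeholders for reference screenshots of your canvas):

    import org.sikuli.script.Screen

    fun main() {
        val screen = Screen()
        // Locate an icon inside the OpenGL canvas purely by image matching, then click it.
        screen.click("toolbar_icon.png")
        // Drag whatever matches one reference image onto the region matching another.
        screen.dragDrop("shape.png", "target_area.png")
    }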
I've been thinking of using a picture-diff tool such as PDiff to test OpenGL code, by taking snapshots, saving them to disk and comparing with previous regression output. That way, really bad stuff (missing textures) pops up, but humanly unnoticeable things (such as the small differences between implementations mentioned above) go through fine.
Also, for automating user interaction, either the GUI classes should be sufficiently open for you to send events or call 'clicked' on a button, or you have to inject OS events manually into your app. This is possible, but much more troublesome. It might be easier to open up the GUI layer, if it's open source.
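The fuzzy snapshot comparison needs no special tool if PDiff is unavailable; here is a sketch of a tolerant image diff on the JVM, in Kotlin, where the threshold values are arbitrary:

    import java.io.File
    import javax.imageio.ImageIO
    import kotlin.math.abs

    // Compare a rendered snapshot against a stored reference image, allowing small
    // per-channel differences so minor implementation variations pass while
    // genuinely broken output (e.g. a missing texture) still fails.
    fun imagesRoughlyEqual(
        actual: File,
        reference: File,
        channelTolerance: Int = 4,           // max per-channel difference
        maxBadPixelFraction: Double = 0.001  // fraction of pixels allowed over it
    ): Boolean {
        val a = ImageIO.read(actual)
        val b = ImageIO.read(reference)
        if (a.width != b.width || a.height != b.height) return false
        var badPixels = 0
        for (y in 0 until a.height) {
            for (x in 0 until a.width) {
                val pa = a.getRGB(x, y) // packed ARGB
                val pb = b.getRGB(x, y)
                val bad = (0 until 4).any { shift ->
                    val ca = (pa shr (shift * 8)) and 0xFF
                    val cb = (pb shr (shift * 8)) and 0xFF
                    abs(ca - cb) > channelTolerance
                }
                if (bad) badPixels++
            }
        }
        return badPixels <= maxBadPixelFraction * a.width * a.height
    }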
