OpenGL ES context loss with LibGDX - opengl-es

I am working on an app using LibGDX, and I am tackling some of the issues that result from OpenGL context loss when the user leaves the app and returns to it. In general this is not much of a problem, but throughout the app's lifetime I occasionally build custom textures at runtime that are used in a couple of different areas.
These textures absolutely must be preserved for when the user comes back into the application, but I am not sure what the best approach is. So, simply: what is the proper way to preserve a texture that cannot simply be loaded back in when it is needed?

Related

Augmented Reality: superimpose texture over real objects. Is it possible?

Please bear with me, as English is not my first language, but I will try to explain my question as clearly as I can.
Let's say I have made a 3D scan of a real table, exported it into a Three.js scene, and created some textures for it.
I would like to view the real object through the camera of a phone or tablet (or any device you might find suitable for the task) and be able to superimpose the texture on the object in the camera view.
The AR app that I intend to develop should be able to detect the real object, the viewing angle, and the distance, and be able to resize the texture on the fly before it is applied to the object on camera.
The question I am asking is: is it possible to achieve this? If yes, I would like to know which technologies and development tools I should consider and explore. Have you done something similar in the past? I am trying to avoid going down a path that ends in a dead end and tears.
Thank you very much for your help.
Three.js is for web applications; as far as I know, there's no way to mix a tracking library like OpenCV with Three.js.
If you want a mobile SDK capable of what you describe, I suggest Vuforia + Unity.
The choice of technologies depends on the platform where you intend to run this application.
If we're talking about mobile applications, then you should check out ARKit (iOS) or ARCore (Android), and consider developing a native application instead of relying on JavaScript. The precision and performance of these native libraries are far better than what you'd achieve on your own or in a JavaScript application; this is not something Three.js will do for you.

If I use D3.js for moderately complicated, interaction-heavy data visualization, should I choose React or Angular 2?

All:
I have some decision pain here. As a beginner, say I need to build a data visualization application with D3, and I am wondering which library I should use to handle the chart drawing (it is a little complicated, with a lot of user actions like add/remove/move/style/animation, etc.). Currently this application is implemented in Angular 1 and is not very modular, and when the data grows, the chart drawing lags. That is why I am thinking of switching.
Right now I do not have much experience with either Angular 2 or React, but I hear that they are both very performant, so in terms of code complexity and performance, could anyone give me some concrete suggestions?
Thanks
We currently have a large javascript application that uses ReactJS and D3 charts.
It's pretty performant, although we have the lucky advantage of not having to support mobile, as it's an enterprise application.
One thing I would say is: be aware of React's shouldComponentUpdate method and use ImmutableJS for your containers. We've also implemented Flux and a WebSocket API, but if you're new to React I'd just try to get a really simple d3 chart, like a sankey, working and go from there.
When you actually get to the leaf node that's a d3 chart, you can then convert Immutable data types to native objects but only to pass it to d3.
It works completely fine; however, I wouldn't manage any sort of state in d3, so be careful not to fall into that trap. You may want to attach events like click to a d3 node, but your React components should handle those events, not d3.
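Roughly, the shape of that pattern looks like this. This is just a sketch: drawSankey is a made-up d3 helper and onNodeClick a made-up prop, but the boundaries are the point (Immutable data converted only at the leaf, events owned by React):

```jsx
import React from 'react';
import Immutable from 'immutable';
import { drawSankey } from './drawSankey'; // hypothetical d3 rendering helper

class SankeyChart extends React.Component {
  shouldComponentUpdate(nextProps) {
    // ImmutableJS containers make this a cheap equality check.
    return !Immutable.is(nextProps.data, this.props.data);
  }

  componentDidMount() { this.draw(); }
  componentDidUpdate() { this.draw(); }

  draw() {
    // Convert Immutable -> native objects only here, at the leaf node,
    // and only to hand the data to d3. No state lives in d3 itself.
    drawSankey(this.svg, this.props.data.toJS(), {
      // d3 may attach the DOM listener, but the React component's
      // callback does the actual handling.
      onNodeClick: this.props.onNodeClick,
    });
  }

  render() {
    return <svg ref={(node) => { this.svg = node; }} />;
  }
}

export default SankeyChart;
```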
Hope that helps.
I have also used d3 and React. They played together perfectly fine for me, especially since d3 is also on npm and fits nicely into the bundler required to compile the JSX used by React. I don't see a downside to it. As a matter of fact, http://avocadojesus.com is built entirely in React and d3.js, and uses the Web Audio API to stream data into different d3 visualizations (so don't view it in Safari unless you want to be bored lol).
I have used Angular 1 on projects and been very happy with most aspects of it. I think the only place it fell short for me was on large ng-repeat mappings. If your front end is architected with your users in mind, it's very unlikely you'll run into large ng-repeat problems (but I would probably avoid it for any infinite-scroll situations, as the ramifications of leaving all those elements in the DOM can cause major problems in Angular's DOM rendering cycles).
I think Angular has been a bit of a disappointment to me only in that it seems to have been hanging in the balance for so long. Angular 2 is a complete rewrite in many ways, so learning Angular 1 right now might cause more growing pains down the road, as you would obviously want to upgrade. Learning Angular 2 was frustrating the last time I tried, as they hadn't released any documentation and were still advising extreme caution against using it for anything non-experimental. Now it's in beta, and it seems pretty safe to learn. If you are feeling brave, or if you are just experimenting and think fondly of Angular 1, then Angular 2 is a good choice for you. The right tool for you should be one that you really love to work with, and both projects have excellent docs and community support, so it's safe either way IMHO.
If it were up to me, I would stick with React. There is a large community built around it, and the framework is very simple. That simplicity is what really allows it to shine. It trains you in the proper way to collect, distribute, and update data to affect the state of all your components. If you are brave and skilled, I would recommend a deep dive into the Flux architecture as well; it is fairly complicated, but it lets you see how well such a simple component architecture can do when informed by a sophisticated set of data stores and actions. Conceptually, it was very enlightening for me, and it informs all the code that I write now. My Rails has even been getting fluxy lately haha >.<

Interactive Augmented Reality 3D drawer

I'm planning an interactive AR application that will use a laser sensor (for distances), GPS to get the location, and a compass/gyroscope for tracking 6DOF viewfinder movements. The user can choose from a number of ready-made 3D models and should be able to place them by selecting the desired location on the screen.
My target platform is an 8-inch handheld device running Windows 8.
Any hints what would be the best AR-SDK or 3D-viewer to work with?
thanks in advance!
There are quite a few 3D viewers that work in the browser, but most recently and most notably: the va3C viewer.
It is a WebGL-based app and doesn't require a server, so if your handheld device supports WebGL you are good to go; however, whether it works on IE is questionable ;).
Based on my experience and your use case, though, I believe client-side JS libraries do not provide enough access to the device's hardware. So you might have to serve information like GPS and gyroscope readings from the server side, gather it on the client using something like socket.io, and then mash it up alongside the geometry, roughly as in the sketch below.
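Something like this, as a rough sketch only; the 'pose' event name, its payload, and the readSensors()/updateSceneCamera() helpers are all made up for illustration:

```javascript
// server.js (Node) -- push sensor readings to the browser with socket.io
var io = require('socket.io')(3000);

io.on('connection', function (socket) {
  var timer = setInterval(function () {
    // readSensors() is hypothetical: whatever reads GPS/gyro on the device
    socket.emit('pose', readSensors()); // e.g. { lat, lng, yaw, pitch, roll }
  }, 100);
  socket.on('disconnect', function () { clearInterval(timer); });
});

// client (browser)
var socket = io('http://localhost:3000');
socket.on('pose', function (pose) {
  // mash the pose up with the WebGL/X3DOM geometry here
  updateSceneCamera(pose); // hypothetical
});
```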
I am trying to do something similar, although I haven't quite done it yet. Will keep you posted.
Another approach I am exploring is X3DOM, which gives you the ability to write 3D data as XML alongside HTML; it is quite declarative and simple to pick up. X3DOM derives from X3D.
Tell me if you need more info.
Also worth exploring for its motion abilities is Robot Studio, a desktop app with an SDK.

Server-side fallback rendering

Is there any way to have three.js running server-side on a headless server (standalone server, Amazon AWS or similar)?
Currently I fall back to canvas rendering (wireframe only, for performance reasons) when the user's browser does not support WebGL. This is good enough for real-time interaction, but for the app to make sense, users would really need some way to see a properly rendered version with lights, shadows, post-processing, etc., even if it comes with significant latency.
So... would it be possible to create a server-side service with a functional three.js instance? The client would still use three.js canvas wireframe rendering, but after, say, a second of inactivity it would request a full render from the server-side service via AJAX and simply overlay it as an image, along the lines of the sketch below.
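Conceptually, the client side would be something like this (the /render endpoint and getCameraState() are hypothetical):

```javascript
var overlay = document.getElementById('render-overlay'); // an <img> on top of the canvas
var idleTimer = null;

function onUserInteraction() {
  overlay.style.display = 'none'; // back to live three.js wireframe rendering
  clearTimeout(idleTimer);
  idleTimer = setTimeout(requestServerRender, 1000); // ~1s of inactivity
}

function requestServerRender() {
  var img = new Image();
  img.onload = function () {
    overlay.src = img.src;           // swap in the fully rendered frame
    overlay.style.display = 'block';
  };
  // getCameraState() is hypothetical: serialize whatever the server needs
  img.src = '/render?state=' + encodeURIComponent(JSON.stringify(getCameraState()));
}
```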
Are there currently any applications, libraries, or anything else that would allow such a thing (functional JavaScript + WebGL + three.js on a headless, preferably Linux, and GPU-less server)?
PhantomJS comes to mind, but apparently it does not yet support WebGL: http://code.google.com/p/phantomjs/issues/detail?id=273
Or are there any alternative approaches to the problem? Going the route of programmatically controlling a full desktop machine with a GPU and a standard Chrome/Firefox instance feels possible, if fragile, and I really, really wouldn't want to go there if there is any software-only solution.
In its QA infrastructure, Google can run Chromium tests using Mesa (see issue 97675, via the switch --use-gl=osmesa). The software rasterizer in the latest edition of Mesa is pretty advanced, using LLVM to convert the shaders and emulate their execution on the CPU. Your first adventure could be building Mesa, building Chromium, and then trying to tie them together.
As a side note, this is also what I plan (in the near future) for PhantomJS itself, in particular since Qt is also moving in that direction, i.e. using Mesa/LLVMpipe instead of its own raster engine only. The numbers actually look good. Even better, for an offline, non-animated single-shot capture, the performance would be more than satisfactory.
There are some inputs in this thread: https://github.com/mrdoob/three.js/issues/2182
In particular, this demo shows how to generate images server-side using Node.js; a rough sketch of that approach follows.
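This is a sketch only, assuming the 'gl' (headless-gl) and 'pngjs' npm packages; the exact canvas shim and WebGLRenderer options vary across three.js versions:

```javascript
var THREE = require('three');
var createGL = require('gl');       // headless-gl: software WebGL context
var PNG = require('pngjs').PNG;
var fs = require('fs');

var width = 800, height = 600;
var gl = createGL(width, height, { preserveDrawingBuffer: true });

// three.js expects a canvas-like object even when headless; a tiny mock
// is usually enough (details depend on the three.js version).
var canvas = { width: width, height: height, style: {},
               addEventListener: function () {} };
var renderer = new THREE.WebGLRenderer({ canvas: canvas, context: gl });
renderer.setSize(width, height);

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 100);
camera.position.z = 5;
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1),
                         new THREE.MeshNormalMaterial()));

renderer.render(scene, camera);

// Read the pixels back and write them out as a PNG (flipping vertically,
// since GL rows start at the bottom).
var pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

var png = new PNG({ width: width, height: height });
for (var y = 0; y < height; y++) {
  for (var x = 0; x < width; x++) {
    var src = ((height - y - 1) * width + x) * 4;
    var dst = (y * width + x) * 4;
    for (var c = 0; c < 4; c++) png.data[dst + c] = pixels[src + c];
  }
}
png.pack().pipe(fs.createWriteStream('render.png'));
```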
Thanks,
Nico
The links below will not resolve your problem with AWS, but they will give you a hint.
I am working on an application with a similar architecture and came across these examples:
Multiplayer game with realtime socket.io
My original question on similar architecture

Is single context the way to go for Appcelerator apps?

I am trying to decide which application paradigm to use for my iOS app built with Appcelerator. Many people talk about the Tweetanium way, i.e. single context, as the best way.
I think I am going to use that but I have a few questions about it.
Since I include all "windows" on the first page, does that mean it will have to load all the windows in the application at app start?
Will this paradigm really be very fast and memory-conservative compared to the "normal" way of, for example, the KitchenSink?
What is the downside of using Tweetanium's way of doing things?
Is it suitable for complex apps?
Thankful for all input!
Short version: Yes :)
Longer version:
Multi-context apps (like the Kitchen Sink) are also fine generally speaking, but you run into the following two problems with larger apps:
1.) Sharing data between windows/contexts within an app
2.) Not knowing when the code for a given window has been run
You can also (potentially) keep a pointer to a UI object created in one context after the window associated with that context has closed, which under some circumstances can cause your app to leak memory. Single context is easier and will ultimately get you into less trouble. Also, if your app is large, be sure to load scripts only as you need them, not all up front, as in the sketch below.
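A sketch of what that lazy loading can look like with CommonJS modules ('ui/SettingsWindow' is a hypothetical module under Resources/):

```javascript
// app.js -- single context; window code is loaded on demand
var win = Ti.UI.createWindow({ title: 'Home' });
var button = Ti.UI.createButton({ title: 'Open Settings' });

button.addEventListener('click', function () {
  // The module's script is parsed and executed only on first use,
  // not at app start.
  var createSettingsWindow = require('ui/SettingsWindow');
  createSettingsWindow().open();
});

win.add(button);
win.open();

// Resources/ui/SettingsWindow.js
module.exports = function createSettingsWindow() {
  var win = Ti.UI.createWindow({ title: 'Settings' });
  // ... build the settings UI here ...
  return win;
};
```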
