Please bear with me as English is not my first language but I will try to explain my question as clearly as I can.
Let's say I have made a 3D scan of a real table, exported it into a Three.js scene, and created some texture for it.
I would like to view the real object through the camera of a phone, tablet (or any device you might find suitable for the task) and be able to superimpose the texture on the object in the camera view.
The AR app that I intend to develop should be able to detect the real object, the viewing angle, and the distance, and resize the texture on the fly before applying it to the object in the camera view.
The question I am asking is: is it possible to achieve this? If yes, I would like to know which technologies and development tools I should consider and explore. Have you done something similar in the past? I am trying to avoid going down a path that ends in a dead end and tears.
Thank you very much for your help.
Three.js is for web applications; as far as I know, there's no way to mix a tracking script like OpenCV with Three.js.
If you want a mobile SDK capable of what you describe, I suggest Vuforia + Unity.
The choice of technologies depends on the platform where you intend to run this application.
If we're talking mobile applications, then you should check out ARKit (iOS) or ARCore (Android), and consider developing a native application instead of relying on JavaScript. The precision and performance of these native libraries are far better than what you'd achieve on your own or in a JavaScript application. This is not something Three.js will do for you.
I'm planning an interactive AR application that will use a laser sensor (for distances), GPS to get a location, and a compass/gyroscope for tracking 6DOF viewfinder movements. The user can choose from a number of ready-made 3D models and should be able to place them by selecting the desired location on the screen.
My target platform is an 8" handheld device running Windows 8.
Any hints on what would be the best AR SDK or 3D viewer to work with?
thanks in advance!
There are quite a few 3D viewers that work in the browser, but most recently and most notably: the va3C viewer.
It is a WebGL-based app and doesn't require a server, so if your handheld device supports WebGL you are good to go; whether it works on IE, however, is questionable ;).
Based on my experience and your use case, though, I believe client-side JS libraries do not provide enough access to the device's hardware. So you might have to serve information like GPS and gyroscope readings from the server side, gather it on the client using something like socket.io, and then mash it up alongside the geometry.
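To make the socket.io part concrete, here is a minimal sketch of relaying sensor readings to a server. The payload field names and the 'sensors' event name are my own assumptions, not part of any standard API:

```javascript
// Sketch: pack raw sensor readings into a compact payload before
// sending them over socket.io. Field names and the 'sensors' event
// name are assumptions for illustration.
function buildSensorPayload(orientation, position) {
  return {
    heading: Math.round(orientation.alpha * 10) / 10, // compass, degrees
    pitch:   Math.round(orientation.beta  * 10) / 10,
    roll:    Math.round(orientation.gamma * 10) / 10,
    lat: position.coords.latitude,
    lon: position.coords.longitude,
    t: Date.now()
  };
}

// Browser-only wiring; skipped outside a browser/socket.io context.
if (typeof window !== 'undefined' && typeof io !== 'undefined') {
  const socket = io(); // assumes a socket.io server on the same host
  window.addEventListener('deviceorientation', (e) => {
    navigator.geolocation.getCurrentPosition((pos) => {
      socket.emit('sensors', buildSensorPayload(e, pos));
    });
  });
}
```

Batching or rounding the readings like this keeps the message volume down, which matters when the gyroscope fires many events per second.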
I am trying to do something similar, although I haven't quite done it yet. Will keep you posted.
Another approach I am exploring is X3DOM, which lets you write 3D data as XML alongside HTML; it is quite declarative and simple to pick up. X3DOM derives from X3D.
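For a flavor of what that looks like, a minimal X3DOM scene is just markup embedded in the page (a sketch; the script path is illustrative):

```html
<!-- Sketch of an X3DOM scene: 3D nodes written as XML inside HTML.
     The x3dom.js path is illustrative; point it at your own copy. -->
<script src="x3dom.js"></script>
<x3d width="400px" height="300px">
  <scene>
    <shape>
      <appearance>
        <material diffuseColor="0.6 0.2 0.2"></material>
      </appearance>
      <box></box>
    </shape>
  </scene>
</x3d>
```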
Tell me if you need more info.
Also worth exploring for its motion abilities is Robot Studio, a desktop app with an SDK.
I've just been wandering around Google and could not find a really convincing method (yet) for making an animated web page using my own vector images. I really love to draw and would like to go deeper and make an interactive web page (Flash-free) using my own vector drawings.
I already have a good understanding of JavaScript, HTML, and CSS, so if any of you could suggest a mature and well-documented library to use as a base for my project, that would be awesome.
A little pros-and-cons insight on using JavaScript vs. CSS for web animation would also be good for other newbies like me in the future, as a consideration before we dip our toes in the sea.
Fabric.js seems to encompass many modern canvas features in browsers. Check it out; I know it supports SVG.
http://fabricjs.com/
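As a rough sketch of loading your own SVG drawing onto a Fabric.js canvas and animating it (the SVG URL and canvas id are placeholders; the Fabric calls are from its documented API, but treat this as a sketch rather than a drop-in solution):

```javascript
// Uniform scale so an object of size objW x objH fits inside the canvas.
function fitScale(objW, objH, canvasW, canvasH) {
  return Math.min(canvasW / objW, canvasH / objH);
}

// Browser-only wiring; skipped when Fabric.js is not loaded.
if (typeof fabric !== 'undefined') {
  const canvas = new fabric.Canvas('c'); // <canvas id="c"> in the page
  fabric.loadSVGFromURL('drawing.svg', (objects, options) => {
    const drawing = fabric.util.groupSVGElements(objects, options);
    drawing.scale(fitScale(drawing.width, drawing.height,
                           canvas.width, canvas.height));
    canvas.add(drawing);
    // Spin the drawing once, re-rendering the canvas on each tick.
    drawing.animate('angle', 360, {
      duration: 2000,
      easing: fabric.util.ease.easeOutCubic,
      onChange: canvas.renderAll.bind(canvas)
    });
  });
}
```

On the JS-vs-CSS question: a CSS transition would express the same ease-out in one declaration, but only JavaScript gives you per-frame control over canvas-drawn vector objects like these.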
I am working on an app using LibGDX, and I am tackling some of the various issues that result from context loss when the user leaves the app and returns to it. In general, this is not much of a problem, but throughout the use of the app I occasionally build custom textures in the app itself to use in a couple different areas.
These textures absolutely must be preserved for when the user comes back in to the application, but I am not sure what the best approach for doing so is. So, simply, what is the proper way to preserve a texture that cannot simply be loaded back in when it is needed?
After reading through the HIG, I see that it is best (and recommended) to provide both regular- and high-resolution images. My question is this: what is the standard practice for loading these images per device?
Do I check the device type at startup and then use some sort of switch statement when I load images dynamically? Thanks in advance.
Geo...
If I am reading your question right, there are several ways to load and blit images to the screen. OpenGL ES is reportedly the fastest, but it takes a while to learn and is mostly geared toward 3D programming. Quartz would also work, though it involves a lot of coding and has a learning curve of its own, if perhaps a gentler one.

The easiest option is the regular UI framework (UIKit): if you name your high-resolution assets with the @2x suffix, [UIImage imageNamed:] picks the right variant for the device automatically, so you usually don't need a device-type switch at all. The Apple iOS documentation covers OpenGL ES, Quartz, and UIKit; just Google "apple dev" and it should be the first thing that pops up. Good luck.
Is it possible to develop a 3D application in .NET (XNA or WPF) that takes advantage of Windows 7's multi-touch support?
Where is the best place to start ?
First: The nature of what you display is independent of whether your application supports multitouch or not.
Second: I don't believe there's great value in 3D interfaces, multitouch or not: the thing you touch is flat (a monitor or the like). Better to go with 2.5D (use shadows to give the whole thing a pseudo-3D look; stay away from 3D cubes).
Third: Have a well-thought-out concept of the user experience: what will be touchable, and how will the application react (animations, etc.)? Have a great look (graphics) designed, and be prepared to have it changed while implementing a multitouch application. Technically, make yourself familiar with gestures, APIs, and whatever else it takes (the hardware is a factor as well).
The second point can (and probably should) be seen as personal taste...
Start with gestures, then dive into the API.