Joystick, gamepad or 3D mouse support in Three.js - user-interface

An open question to the three.js community.
As far as I know, the Three.js framework has different "enhancements" like OrbitControls.js that allow us to use the mouse or a touchscreen to explore a 3D scene.
Now my question:
Is there another "enhancement" for Three.js out there that allows the use of joysticks, gamepads or 3D mice?
If not, how easy or difficult do you think it would be to implement such functionality?

The link by juagicre is useful.
Here are some more. The first one checks that signals from your connected gamepad are being received:
(1) html5gamepad.com
(2) html5rocks.com gamepad tester with image
(3) stemkoski html5 demo
(4) stemkoski THREE.js demo
(5) html5rocks tutorial
I just picked up a cheap unbranded USB gamepad in a thrift shop today. Using Firefox 38.0.5 or Chrome 47.0.2526.58 (beta-m), it works for (1) and (2) but not yet for (3), (4), (5).
The Examples section of the official THREE.js website does not seem to include a gamepad example (look under /misc).
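For what it's worth, here is a rough sketch of how the HTML5 Gamepad API can be polled inside a three.js render loop. The `scene`, `camera` and `renderer` variables, the dead zone and the movement speed are assumptions made up for the example; browser support follows the compatibility notes in the links above.

```js
// Minimal sketch: poll the Gamepad API each frame and steer a three.js
// camera with the left stick. Assumes `scene`, `camera` and `renderer`
// have already been created elsewhere.
const DEADZONE = 0.15; // ignore small stick drift

function applyGamepadInput() {
  const pads = navigator.getGamepads ? navigator.getGamepads() : [];
  const pad = pads[0];                 // first connected gamepad, if any
  if (!pad) return;

  const x = Math.abs(pad.axes[0]) > DEADZONE ? pad.axes[0] : 0;
  const y = Math.abs(pad.axes[1]) > DEADZONE ? pad.axes[1] : 0;

  camera.position.x += x * 0.1;        // strafe with the horizontal axis
  camera.position.z += y * 0.1;        // move forward/back with the vertical axis
}

function animate() {
  requestAnimationFrame(animate);
  applyGamepadInput();
  renderer.render(scene, camera);
}
animate();
```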

Here is some info regarding the state of gamepad support on the web.
It looks like there is still a while to wait until this gets widely supported.
Have fun!

Related

Creating An Interactive Timeline

I want to build a 3D design program, much like Tinkercad. However, in this program the user can create keyframes and play back an animation through a timeline in the UI.
This is nothing new; many desktop applications can do this (e.g. Blender, 3ds Max, After Effects...). However, I couldn't find an example project or anything helpful in the documentation. Is it possible to make this happen in three.js?
Have a look at GreenSock's GSAP; it is a general-purpose HTML5 animation library that can easily be applied to a three.js project. There is also a tutorial about creating timelines using GSAP.
You might also want to check out the Under Neon Lights editor, which uses Frame.js.
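As a rough illustration of the GSAP approach (using the `gsap.timeline()` API from current GSAP releases; the `mesh`, `scene`, `camera` and `renderer` names and the durations are made up for the example), keyframes can be expressed as tweens appended to a timeline and then scrubbed from a UI control:

```js
// Minimal sketch (assumes three.js and GSAP are loaded, and that
// `mesh`, `scene`, `camera` and `renderer` already exist).
const tl = gsap.timeline({ repeat: -1, yoyo: true });

// Keyframe 1: move the mesh to the right over 1 second.
tl.to(mesh.position, { duration: 1, x: 2 });
// Keyframe 2: spin it around the Y axis.
tl.to(mesh.rotation, { duration: 1, y: Math.PI });
// Keyframe 3: scale it up, overlapping the previous tween slightly.
tl.to(mesh.scale, { duration: 0.5, x: 1.5, y: 1.5, z: 1.5 }, '-=0.5');

// The timeline can be scrubbed from UI controls, e.g. a slider:
// tl.pause(); tl.progress(sliderValue); // sliderValue in 0..1

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```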

Three.js browser compatibility

I was reading the Three.js Wikipedia page, and it states "Three.js runs in all browsers supported by WebGL." If you use the canvas renderer, will an application/game created with Three.js run in browsers that support canvas but not WebGL? Also, are there any known issues with Three.js and mobile browsers?
Actually, all browsers that have support for canvas are supported. We do not support a polyfilled canvas, mainly because most of the time we use other things besides the canvas that are not implemented by the browser.
Check out the browser compatibility list here:
http://caniuse.com/webgl
There's another site with a pretty neat breakdown of the OS + device/browser combination support for WebGL:
http://webglstats.com
Edit: To answer your second question on mobile, problems are unlikely if the mobile browser supports WebGL, since WebGL is basically based on OpenGL ES 2.0 (ES stands for Embedded Systems, and it mainly targets things like mobile devices).
You can take a look at a WebGL plugin for Internet Explorer 10 and below.
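As an illustration of the canvas fallback idea, here is a rough sketch of a feature check that picks a renderer at runtime. It assumes an older three.js build that still ships `THREE.CanvasRenderer` (it was later moved to the examples folder and eventually removed from the library), so treat it as a sketch rather than current API:

```js
// Rough sketch: choose a renderer based on WebGL availability.
function webglAvailable() {
  try {
    const canvas = document.createElement('canvas');
    return !!(window.WebGLRenderingContext &&
              (canvas.getContext('webgl') ||
               canvas.getContext('experimental-webgl')));
  } catch (e) {
    return false;
  }
}

// Fall back to the (legacy) canvas renderer when WebGL is missing.
const renderer = webglAvailable()
  ? new THREE.WebGLRenderer({ antialias: true })
  : new THREE.CanvasRenderer();

renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
```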

Generate and post Multitouch-Events in OS X to control the mac using an external camera

I am currently working on a research project for my university. The goal is to control a Mac using the Microsoft Kinect camera. Another student is writing the Kinect driver (which will be mounted somewhere on the ceiling or on the wall behind the Mac, and which outputs the position of all fingers on the Mac's screen).
It is my responsibility to take those finger positions and react to them. The goal is to use a single finger to control the mouse, and to react to multiple fingers in exactly the same way as if they were on the trackpad.
I thought this was going to be easy and straightforward, but it's not. It is actually very easy to control the mouse cursor with one finger (using CGEvent), but unfortunately there is no public API for creating and posting multitouch gestures to the system.
I've done a lot of research, including catching all CGEvents using an event tap at the lowest possible position and trying to disassemble them, but no real progress so far.
Then I stumbled over this and realized that even the lowest position for an event tap is not deep enough:
Extending Functionality of Magic Mouse: Do I Need a kext?
If I understand it correctly, the built-in trackpad (and the Magic Mouse and the Magic Trackpad) communicates through a KEXT (kernel extension) with the private MultitouchSupport framework, which generates and posts the incoming data in some way to the OS.
So I would need to use private APIs from the MultitouchSupport.framework to do the very same thing the trackpad does, right?
Or would I need to write a kernel extension myself?
And if I need to use the MultitouchSupport framework:
How can I disassemble it to get at the private APIs? (I know class-dump, but that only works on Objective-C frameworks, which this framework is not.)
Many thanks for any response!
NexD.
"The goal is to use one single finger to control the mouse and react on multiple fingers the very same way" here if I understand what you are trying to do is you try to track fingers from Kinect. But the thing is Kinect captures only major body joints. But you can do this with other third party libraries I guess. Here is a sample project I saw. But its for windows. Just try to get the big picture there http://channel9.msdn.com/coding4fun/kinect/Finger-Tracking-with-Kinect-SDK-and-the-Kinect-for-XBox-360-Device

Swing Animations: is there a GUI library animation like jquery (javascript) but for java?

Is there a library available to animate and make a Java GUI look nice, with animations and transitions like jQuery provides?
Something like CSS hover effects, animating a panel, rounding the corners of panels so that they look more sophisticated, etc.
So far my exploration has led me to try Java CSS by Ethan Nicholson, which appears to have been discontinued (removed from the java.net site?) and does not appear to have been uploaded anywhere else.
And then there is Chet Haase's work on the Timing Framework for animating Swing, and other nice stuff using the SwingX framework.
Has anyone got any advice on an open-source animation library specifically designed for Swing components? Or am I dreaming, and should I be hauling out the wallet to pay for someone else's hard labour?
Kirill Grouchnikov created an excellent animation library called Trident. More information can be found at http://kenai.com/projects/trident/pages/Home

Using Windows Phone 7 pinching in XNA

I'm trying to figure out how to implement pinch-to-zoom functionality. My problem is I'm not sure how to do it algorithmically.
I have the pinch positions of both fingers and the amount they've moved since the last frame. At first I tried making the pinch amount the delta of the distance between the two fingers; however, every way I've built on this concept has been unwieldy.
Even if I manage to get the pinching working semi-decently, I still have the problem of the zoom direction, and of how to make the image zoom in on the center of the pinch area...
Is there a proper way of implementing such functionality?
I also recommend reading this, a really well-implemented "gold standard" pinch:
http://adtsai.blogspot.com/2010/09/pinch-zooming-using-xna4-on-wp7-getting.html
It also makes reference to a pinch-to-zoom add-in so you can test pinching on the emulator with just a mouse.
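To illustrate the distance-based idea described in the question (shown here as a plain JavaScript sketch rather than XNA/C#, with made-up variable names), a common trick is to scale by the ratio of the current finger distance to the previous one, and then adjust the pan offset so the point under the pinch center stays fixed:

```js
// Illustrative sketch of ratio-based pinch math (not XNA-specific; the
// same arithmetic applies to a C#/XNA camera or sprite transform).
// `prev1`/`prev2` are the finger positions from the last frame,
// `cur1`/`cur2` the positions this frame, and `view` holds the current
// zoom level and pan offset. All names are made up for the example.
function applyPinch(view, prev1, prev2, cur1, cur2) {
  const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y);

  // Scale factor: >1 when the fingers move apart (zoom in), <1 when
  // they move together (zoom out). Using the ratio instead of the raw
  // delta keeps the zoom speed consistent at any finger spacing.
  const scale = dist(cur1, cur2) / dist(prev1, prev2);

  // Pinch center in screen coordinates.
  const cx = (cur1.x + cur2.x) / 2;
  const cy = (cur1.y + cur2.y) / 2;

  // Zoom about the pinch center: scale the zoom, then adjust the pan
  // offset so the world point under (cx, cy) stays under the fingers.
  view.zoom *= scale;
  view.offsetX = cx - (cx - view.offsetX) * scale;
  view.offsetY = cy - (cy - view.offsetY) * scale;
}
```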
What you want to do is use the built-in gesture API (specifically Pinch and PinchComplete). That way, you can take advantage of the heuristics that the XNA/WP7 team has already built in. Your app will feel "more native" this way because it will respond to a pinch gesture the same way the rest of the OS does.
Nick Gravelyn has a great intro to the gesture API here:
http://blogs.msdn.com/b/nicgrave/archive/2010/07/12/touch-gestures-on-windows-phone-7.aspx
I found a few links to third-party solutions...
1) Dual-Touch SDK for Resistive Screens V1.0 Beta, Rotation Alpha
2) SciLor's HD2 / Leo Multitouch .NET CF DLL
I tried using the Dual-Touch SDK, which worked fine for resistive-screen phones but not for capacitive-screen phones.