I have been using Three.js for a couple of weeks now and am really blown away by the power of this library!
Now, I would like to leverage it in order to run raytracing simulations for lower frequencies than light (RF). I figure it should be possible to alter or create new lights that would represent the sources of RF waves and then write a specific renderer that would take into account the interference aspects that apply in the RF context.
Is that the right approach? If so what functions/libraries should I focus on? Maybe all the raytracing/raycasting is already done and can be reused with little modification?
Any help appreciated!
Thanks
M.
Yes, depending on what domain you are modelling, radiometric simulations can be done with ray tracing. Photometric renderers are radiometric renderers with the domain restricted to the visible spectrum. Often you can simply swap photometric quantities like lux for their radiometric counterparts (e.g. irradiance), which is a hint that the domain being modelled can sit outside the visible spectrum.
Bear in mind that, depending on what you wish to model, you need to assume that the RF waves will behave in a similar vein to a ray/photon. Effects like diffraction, or propagation through vector fields, require intelligence beyond standard path tracing; instead look at the ray marching used in volumetric renderers. Volumetric ray tracing is more akin to finite element analysis, since we don't just calculate the intersection point of the ray but also consider the ray's propagation through a medium (vector field) and the effect this medium has on the resultant ray path.
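To give a flavour of what that ray-marching loop looks like, here is a generic sketch (nothing Three.js specific; the refractive_gradient and absorption callbacks are placeholders for whatever RF propagation model you end up with):

    import numpy as np

    def march_ray(origin, direction, refractive_gradient, absorption,
                  step=0.05, n_steps=400):
        """Minimal volumetric ray march: advance a ray through a medium,
        bending it by a (hypothetical) gradient field and accumulating
        attenuation along the path."""
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        transmittance = 1.0
        for _ in range(n_steps):
            pos = pos + d * step                      # advance along the current direction
            d = d + refractive_gradient(pos) * step   # bend the ray by the local field gradient
            d /= np.linalg.norm(d)                    # keep the direction normalised
            transmittance *= np.exp(-absorption(pos) * step)  # Beer-Lambert attenuation
            if transmittance < 1e-4:                  # ray effectively absorbed
                break
        return pos, d, transmittance

    # Usage with a uniform, weakly absorbing medium (no bending):
    end, out_dir, t = march_ray([0, 0, 0], [1, 0, 0],
                                refractive_gradient=lambda p: np.zeros(3),
                                absorption=lambda p: 0.1)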
P.S. Sorry, I can't help with the specifics of Three.js :( Never used it.
I've recently been getting into character animation, but since I come from a programming background I like to approach everything procedurally.
I got started with rigging and animation using Blender and Cascadeur, but that approach seems very limited from my perspective. I saw that Houdini is capable of procedural animation, but the resources seem quite limited so I am not sure to what extent.
I would like to be able to abstract motion logic and potentially reuse it, layer it, and use custom logic to drive bones. I would also like to be able to use noise and custom blending functions (a spring, for example). In some cases I want to block out the base motion traditionally and layer procedural motion on top; in other cases I want to define the base motion logically and again layer other motion on top.
Example:
I want to animate a cork flip and drive the height of the flip based on the angular momentum of the sweeping leg and also drive the angular momentum of the rotation based on the relative position between the hand and elbow joints, and the chest bone. This way I can adjust certain parameters and get different motions. Obviously there are many holes in that example, but that is the gist of what I am looking to achieve.
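To make the kind of layering I mean concrete, here is a small tool-agnostic sketch; every function and number in it is made up:

    import math, random

    def base_rotation(t):
        """Blocked-out base motion: a simple keyframed-style swing (degrees)."""
        return 45.0 * math.sin(2.0 * math.pi * t)

    def noise_layer(t, amplitude=3.0):
        """Procedural layer: small frame-coherent jitter on top of the base."""
        random.seed(int(t * 30))
        return amplitude * (random.random() * 2.0 - 1.0)

    def spring_blend(current, target, velocity, stiffness=80.0, damping=12.0, dt=1/30):
        """Custom blend: a damped spring pulls the animated value toward its target."""
        accel = stiffness * (target - current) - damping * velocity
        velocity += accel * dt
        current += velocity * dt
        return current, velocity

    angle, vel = 0.0, 0.0
    for frame in range(90):
        t = frame / 30.0
        target = base_rotation(t) + noise_layer(t)   # layered motion target
        angle, vel = spring_blend(angle, target, vel)
        # 'angle' would drive a bone's rotation channel here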
Is Houdini what I am looking for and if so, what resources could I look into?
Or should I be looking into custom solutions?
Are there any known techniques or approaches for achieving what I've given as an example (in Houdini or as a custom solution)?
Most resources about ray tracing describe something like:
"shoot rays and find the first obstacle each one hits"
"shoot secondary rays..."
"or do it in reverse and approximate/interpolate"
I haven't seen any algorithm that uses diffusion. Let's assume a point light is a cell that has more density than the other cells (all of space is divided into cells); every step/iteration of lighting/tracing makes that source cell diffuse into its neighbours using a velocity field, then their neighbours, and so on. After some satisfactory number of iterations (say 30-40), the density info of each cell is used to illuminate the objects in that cell.
Point light and velocity field: (image not included)
But the grid would have to be something like 1000x1000x1000 in size, and that would take too much time and memory to compute. Maybe computing just 10x10x10 and, when an obstacle is found, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could help generate lighting/shadows at an acceptable resolution? Especially for vertex-based illumination rather than per-triangle.
Has anyone tried this approach?
Note: the velocity field is there to make light diffuse mostly outwards (not 100%, but 99%, so there is some global illumination). A finite element method can make this embarrassingly parallel.
Edit: any object hit by a positive density becomes an obstacle that generates a new velocity field around its surface. So light cannot go through that object but can be mirrored in another direction (if it is a lens-like object, light diffuses through it more slowly). That way the reflected light can affect other objects, given a higher iteration limit.
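A toy version of the grid update I have in mind looks like this (the grid size, blocking box and diffusion rate are made up, and the velocity-field bias is left out for brevity):

    import numpy as np

    N = 32                                     # toy grid; the idea above imagines ~1000^3
    density = np.zeros((N, N, N))
    obstacle = np.zeros((N, N, N), dtype=bool)
    density[N // 2, N // 2, N // 2] = 1000.0   # the "point light" cell
    obstacle[20:24, 10:22, 10:22] = True       # an arbitrary blocking box

    def diffuse_step(density, obstacle, rate=0.4):
        """One iteration: each cell pushes a fraction of its density to its 6
        neighbours; obstacle cells accept nothing, so density piles up in front
        of them instead of passing through.  (np.roll wraps at the borders and
        density pushed into an obstacle is simply dropped - fine for a toy.)"""
        new = density * (1.0 - rate)
        share = density * rate / 6.0
        for axis in range(3):
            for shift in (+1, -1):
                moved = np.roll(share, shift, axis=axis)
                moved[obstacle] = 0.0          # blocked: density does not enter
                new += moved
        new[obstacle] = 0.0
        return new

    for _ in range(40):                        # the "30-40 iterations"
        density = diffuse_step(density, obstacle)
    # density[x, y, z] would then be used as the illumination of geometry in that cell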
The same kd-tree can be used in object-collision algorithms :)
Take this with a grain of salt: a neural network could be trained for advection & diffusion on a 30x30x30 grid, and that could be used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.
There are a couple of problems with this as it stands.
The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of other photons around it. So using a density field and trying to make light follow the classic Navier-Stokes style solutions (which is what you're trying to do, based on the density-field explanation you gave) would produce incorrect results. Given enough iterations it would also result in complete entropy over the scene, which is also not what happens to light.
Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going in different directions within the same cell, which is required for global illumination and diffuse lighting.
So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P
Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This sort of thing is called photon mapping, but only a direct lighting solution is simple to implement with it :P
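Just to make that concrete, here is a rough sketch of direct-lighting photon mapping over a made-up scene (one point light above a ground plane), with a brute-force gather standing in for the usual kd-tree lookup:

    import math, random

    LIGHT_POS = (0.0, 5.0, 0.0)
    LIGHT_POWER = 100.0        # total power, arbitrary units
    N_PHOTONS = 100_000

    photon_map = []            # list of (x, z, power) hits on the plane y = 0

    for _ in range(N_PHOTONS):
        # Sample a uniform random direction (rejection sampling in the unit ball).
        while True:
            d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
            L = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
            if 1e-6 < L <= 1.0:
                d = (d[0]/L, d[1]/L, d[2]/L)
                break
        if d[1] >= 0:
            continue                                  # photon goes up, misses the plane
        t = -LIGHT_POS[1] / d[1]                      # intersection with y = 0
        hit = (LIGHT_POS[0] + t*d[0], LIGHT_POS[2] + t*d[2])
        photon_map.append((hit[0], hit[1], LIGHT_POWER / N_PHOTONS))

    def irradiance(x, z, radius=0.5):
        """Gather photons within 'radius' of (x, z) and divide by the disc area."""
        total = sum(p for (px, pz, p) in photon_map
                    if (px - x)**2 + (pz - z)**2 <= radius**2)
        return total / (math.pi * radius**2)

    print(irradiance(0.0, 0.0))   # brightest directly under the light
    print(irradiance(4.0, 0.0))   # dimmer further away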
I used ReconstructMe to scan the upper half of my body (arm and head). The result I got is a 3D mesh, which I opened in 3ds Max. What I need to do now is add animation/motion to the 3D arm and head.
I think ReconstructMe created a mesh. Do I need to convert that mesh to a 3D object before adding animation? If so, how do I do it?
Do I need to separate the head and arm to add different animations to them? How do I do that?
I am a beginner in 3ds max. I am using 3ds max 2012, student edition.
Typically you would set up bones, link the mesh to the bones with a Skin or Physique modifier, and then animate the bones as needed.
You can have one mesh or separate meshes, depending on your needs.
For setting up the rigging, it would be good to utilize a tutorial like this
http://www.digitaltutors.com/11/training.php?pid=332
I find Digital Tutors to be very concise and detailed, so anybody can grasp the concepts if they're patient enough. Depending on the motion you would like, some parts of the bones will require FK (forward kinematics), IK (inverse kinematics), or a mixture of FK/IK control in areas like the elbows of the arms, etc.
Certain other parts of the character would also benefit from CAT controls. Throughout the whole rigging process, the biggest foundation to maintain is the hierarchy and the process of parenting/linking the controls correctly.
Also, your mesh's topology needs to be correct. When scanning from an outside source you will get either (a) a lot of triangles or (b) bad edge flow, so before the rigging process make sure to take the time to get your scan's topology into the state it should be in.
I am hacking a vacuum cleaner robot to control it with a microcontroller (Arduino). I want to make it more efficient at cleaning a room. For now, it just goes straight and turns when it hits something.
But I have trouble finding the best algorithm or method to use to know its position in the room. I am looking for an idea that stays cheap (less than $100) and not too complex (one that doesn't require a PhD thesis in computer vision). I can add some discrete markers in the room if necessary.
Right now, my robot has:
One webcam
Three proximity sensors (around 1 meter range)
Compass (not used for now)
Wi-Fi
Its speed varies depending on whether the battery is full or nearly empty
An Eee PC netbook is embedded on the robot
Do you have any ideas for doing this? Does any standard method exist for this kind of problem?
Note: if this question belongs on another website, please move it; I couldn't find a better place than Stack Overflow.
The problem of figuring out a robot's position in its environment is called localization. Computer science researchers have been trying to solve this problem for many years, with limited success. One problem is that you need reasonably good sensory input to figure out where you are, and sensory input from webcams (i.e. computer vision) is far from a solved problem.
If that didn't scare you off: one of the approaches to localization that I find easiest to understand is particle filtering. The idea goes something like this (a toy code sketch follows the steps below):
You keep track of a bunch of particles, each of which represents one possible location in the environment.
Each particle also has an associated probability that tells you how confident you are that the particle really represents your true location in the environment.
When you start off, all of these particles might be distributed uniformly throughout your environment and be given equal probabilities. Here the robot is gray and the particles are green.
When your robot moves, you move each particle. You might also degrade each particle's probability to represent the uncertainty in how the motors actually move the robot.
When your robot observes something (e.g. a landmark seen with the webcam, a wifi signal, etc.) you can increase the probability of particles that agree with that observation.
You might also want to periodically replace the lowest-probability particles with new particles based on observations.
To decide where the robot actually is, you can either use the particle with the highest probability, the highest-probability cluster, the weighted average of all particles, etc.
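Here's a minimal 1D version of those steps in code; the wall position, noise levels and motion amounts are all invented purely for illustration:

    import random, math

    # Toy 1D particle filter: the robot lives on a 10 m line, moves to the right,
    # and measures its (noisy) distance to a wall at x = 10.
    N_PARTICLES = 500
    WALL_X = 10.0
    MOTION_NOISE = 0.05        # std-dev of how far the motors really move us
    SENSOR_NOISE = 0.2         # std-dev of the range sensor

    particles = [random.uniform(0.0, 10.0) for _ in range(N_PARTICLES)]   # uniform prior
    weights = [1.0 / N_PARTICLES] * N_PARTICLES

    def gaussian(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def step(particles, weights, commanded_move, measured_range):
        # 1. Motion update: move every particle, adding noise for motor uncertainty.
        particles = [p + commanded_move + random.gauss(0.0, MOTION_NOISE) for p in particles]
        # 2. Measurement update: particles that agree with the observation gain weight.
        weights = [w * gaussian(WALL_X - p, measured_range, SENSOR_NOISE)
                   for p, w in zip(particles, weights)]
        total = sum(weights) or 1e-12
        weights = [w / total for w in weights]
        # 3. Resample: replace low-weight particles with copies of high-weight ones.
        particles = random.choices(particles, weights=weights, k=N_PARTICLES)
        weights = [1.0 / N_PARTICLES] * N_PARTICLES
        return particles, weights

    true_x = 2.0
    for _ in range(20):
        true_x += 0.3                                     # the robot actually moves 0.3 m
        measured = WALL_X - true_x + random.gauss(0.0, SENSOR_NOISE)
        particles, weights = step(particles, weights, 0.3, measured)

    estimate = sum(p * w for p, w in zip(particles, weights))
    print(f"true position {true_x:.2f} m, estimated {estimate:.2f} m")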
If you search around a bit, you'll find plenty of examples: e.g. a video of a robot using particle filtering to determine its location in a small room.
Particle filtering is nice because it's pretty easy to understand. That makes implementing and tweaking it a little less difficult. There are other similar techniques (like Kalman filters) that are arguably more theoretically sound but can be harder to get your head around.
A QR code poster in each room would not only make an interesting modern art piece, but would be relatively easy to spot with the camera!
If you can place some markers in the room, using the camera could be an option. If 2 known markers have an angular displacement (left to right) then the camera and the markers lie on a circle whose radius is related to the measured angle between the markers. I don't recall the formula right off, but the arc segment (on that circle) between the markers will be twice the angle you see. If you have the markers at known height and the camera is at a fixed angle of inclination, you can compute the distance to the markers. Either of these methods alone can nail down your position given enough markers. Using both will help do it with fewer markers.
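If it helps, the geometry works out roughly like this; the marker spacing, heights and angles below are invented:

    import math

    # Two markers a known distance apart, seen with a measured angular separation.
    # By the inscribed-angle theorem the camera lies on a circle through both
    # markers whose radius follows from the chord length and the observed angle.
    marker_separation = 2.0            # metres between the two markers (known)
    observed_angle = math.radians(25)  # angle between them as seen by the camera

    circle_radius = marker_separation / (2.0 * math.sin(observed_angle))
    print(f"camera lies on a circle of radius {circle_radius:.2f} m through the markers")

    # Marker at a known height, camera at a fixed inclination: simple trigonometry
    # gives the horizontal distance to the marker.
    marker_height = 1.5                     # metres above the camera (known)
    elevation_angle = math.radians(12)      # measured angle up to the marker
    distance = marker_height / math.tan(elevation_angle)
    print(f"horizontal distance to marker: {distance:.2f} m")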
Unfortunately, those methods are imperfect due to measurement errors. You get around this by using a Kalman estimator to incorporate multiple noisy measurements and arrive at a good position estimate; you can then feed in some dead-reckoning information (which is also imperfect) to refine it further. This part goes pretty deep into math, but I'd say it's a requirement to do a great job at what you're attempting. You can do OK without it, but if you want an optimal solution (in terms of the best position estimate for a given input) there is no better way. If you actually want a career in autonomous robotics, this will play a large part in your future.
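A minimal 1D sketch of the Kalman idea, with invented variances, just to show how the prediction and the measurement get blended:

    # Blend a dead-reckoning prediction with a noisy position measurement,
    # weighting each by how uncertain it is.
    x, P = 0.0, 1.0            # position estimate and its variance

    def kalman_step(x, P, moved, Q, z, R):
        # Predict: apply the dead-reckoning motion, growing the uncertainty.
        x_pred = x + moved
        P_pred = P + Q
        # Update: correct toward the measurement z, in proportion to the gain.
        K = P_pred / (P_pred + R)          # Kalman gain in 1D
        x_new = x_pred + K * (z - x_pred)
        P_new = (1.0 - K) * P_pred
        return x_new, P_new

    # One step: we think we drove 0.30 m (odometry variance 0.01),
    # and a marker sighting says we're at 0.27 m (measurement variance 0.04).
    x, P = kalman_step(x, P, moved=0.30, Q=0.01, z=0.27, R=0.04)
    print(x, P)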
Once you can determine your position you can cover the room in any pattern you'd like. Keep using the bump sensor to help construct a map of obstacles and then you'll need to devise a way to scan incorporating the obstacles.
Not sure if you've got the math background yet, but here is the book:
http://books.google.com/books/about/Applied_optimal_estimation.html?id=KlFrn8lpPP0C
This doesn't replace the accepted answer (which is great, thanks!) but I might recommend getting a Kinect and use that instead of your webcam, either through Microsoft's recently released official drivers or using the hacked drivers if your EeePC doesn't have Windows 7 (presumably it does not).
That way the positioning will be improved by the 3D vision. Observing landmarks will now tell you how far away the landmark is, and not just where in the visual field that landmark is located.
Regardless, the accepted answer doesn't really address how to pick out landmarks in the visual field, and simply assumes that you can. While the Kinect drivers may already have feature detection included (I'm not sure) you can also use OpenCV for detecting features in the image.
One solution would be to use a strategy similar to "flood fill" (wikipedia). To get the controller to accurately perform sweeps, it needs a sense of distance. You can calibrate your bot using the proximity sensors: e.g. run motor for 1 sec = xx change in proximity. With that info, you can move your bot for an exact distance, and continue sweeping the room using flood fill.
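A rough sketch of that idea on a grid map (the room layout and cell size are assumptions; in practice the grid would come from the calibrated distance figure and the proximity sensors):

    from collections import deque

    room = [
        "##########",
        "#........#",
        "#..##....#",
        "#..##....#",
        "#........#",
        "##########",
    ]

    def coverage_order(grid, start):
        """Breadth-first flood fill: returns the order in which to sweep free cells."""
        rows, cols = len(grid), len(grid[0])
        seen, order = {start}, []
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            order.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == "." and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        return order

    plan = coverage_order(room, (1, 1))
    print(len(plan), "cells to visit, starting with", plan[:5])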
Assuming you are not looking for a generalised solution, you may actually know the room's shape, size, potential obstacle locations, etc. When the bot exits the factory there is no info about its future operating environment, which kind of forces it to be inefficient from the outset.
If that's your case, you can hardcode that info and then use basic measurements (i.e. rotary encoders on the wheels + compass) to precisely figure out its location in the room/house. No need for Wi-Fi triangulation or crazy sensor setups in my opinion. At least for a start.
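For example, a minimal dead-reckoning update under those assumptions; the metres-per-tick constant is made up and would come from calibration:

    import math

    # Keep a running (x, y) by integrating the distance travelled along the
    # current compass heading.
    METERS_PER_TICK = 0.002     # hypothetical: wheel circumference / encoder ticks

    x, y = 0.0, 0.0

    def update_pose(x, y, encoder_ticks, compass_heading_deg):
        """Advance the pose estimate by one odometry reading."""
        d = encoder_ticks * METERS_PER_TICK
        heading = math.radians(compass_heading_deg)
        return x + d * math.cos(heading), y + d * math.sin(heading)

    # e.g. 150 ticks while the compass read 90 degrees
    x, y = update_pose(x, y, 150, 90.0)
    print(x, y)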
Have you ever considered GPS? Every position on Earth has unique GPS coordinates, with a resolution of 1 to 3 metres; with differential GPS you can get down to the sub-10 cm range. More info here:
http://en.wikipedia.org/wiki/Global_Positioning_System
And the Arduino has lots of GPS module options:
http://www.arduino.cc/playground/Tutorials/GPS
After you have collected all the key coordinate points of the house, you can then write the routine for the Arduino to move the robot from point to point (as collected above), assuming it handles all the obstacle-avoidance stuff.
More information can be found here:
http://www.google.com/search?q=GPS+localization+robots&num=100
And inside the list I found this - specifically for your case: Arduino + GPS + localization:
http://www.youtube.com/watch?v=u7evnfTAVyM
I was thinking about this problem too. But I don't understand why you can't just triangulate? Have two or three beacons (e.g. IR LEDs of different frequencies) and a rotating IR sensor 'eye' on a servo. You could then get an almost constant fix on your position. I expect the accuracy would be in the low-cm range, and it would be cheap. You could then map anything you bump into easily.
Maybe you could also use any interruption in the beacon beams to plot objects that are quite far from the robot too.
You said you have a camera? Did you consider looking at the ceiling? There is little chance that two rooms have identical dimensions, so you can identify which room you are in; the position in the room can be computed from the angular distance to the borders of the ceiling, and the direction can probably be extracted from the position of the doors.
This will require some image processing, but since the vacuum cleaner has to move slowly to clean efficiently, it will have enough time to compute.
Good luck !
Use an ultrasonic sensor such as the HC-SR04 or similar.
As mentioned above, sense the distance from the walls to the robot with the sensors, and identify which part of the room you are in with QR codes.
When you are near a wall, turn 90 degrees, move forward by the width of your robot, then turn 90 degrees again (i.e. a 90-degree left turn) and keep moving. I think that will help :)
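Roughly, that sweep pattern looks like this, simulated on a tiny invented grid rather than with real motor commands:

    # '#' is a wall; the robot sweeps each row, shifts over one robot-width,
    # and comes back the other way (lawn-mower / boustrophedon pattern).
    room = ["##########",
            "#........#",
            "#........#",
            "#........#",
            "##########"]

    path = []
    direction = 1                                   # +1: sweep right, -1: sweep left
    for row in range(1, len(room) - 1):             # each row is one robot-width strip
        cols = range(1, len(room[0]) - 1)
        for col in (cols if direction == 1 else reversed(cols)):
            if room[row][col] == ".":
                path.append((row, col))             # cell the robot passes over
        direction = -direction                      # turned twice: now sweep back

    print(path[:12])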
Occlusion algorithms are needed in both the CAD and game industries, and I think they differ between the two. My questions are:
What kinds of occlusion algorithms are applied in each of the two industries?
And what is the difference?
I am working on CAD software development, and the occlusion algorithm we have adopted is: set each object's identifier as its color (an integer), render the scene, and finally read back the pixels to find out which objects are visible. The performance is not so good, so I want to get some good ideas here. Thanks.
After reading the answers, I want to clarify that by occlusion algorithms here I mean "occlusion culling" - finding the visible surfaces or entities before sending them into the pipeline.
With Google, I found an algorithm at Gamasutra. Any other good ideas or findings? Thanks.
In games, occlusion is done behind the scenes using one of two 3D libraries: DirectX or OpenGL. To get into specifics, occlusion is done using a Z-buffer. Each point has a Z component; points that are closer occlude points that are further away.
The occlusion algorithm is usually done in hardware by a dedicated 3D graphics processing chip that implements DirectX or OpenGL functions. A game program using DirectX or OpenGL will draw objects in 3D space, and have the OpenGL/DirectX library render the scene taking into account projection and occlusion.
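In essence the hardware is doing something like this per fragment (a toy software sketch, not actual DirectX/OpenGL code):

    # Keep the nearest depth per pixel and only accept fragments that are closer.
    WIDTH, HEIGHT = 4, 3
    depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color_buffer = [[None] * WIDTH for _ in range(HEIGHT)]

    def write_fragment(x, y, z, color):
        """Accept the fragment only if it is nearer than what is already stored."""
        if z < depth_buffer[y][x]:
            depth_buffer[y][x] = z
            color_buffer[y][x] = color

    write_fragment(1, 1, z=5.0, color="far object")
    write_fragment(1, 1, z=2.0, color="near object")   # occludes the far one
    write_fragment(1, 1, z=9.0, color="even farther")  # rejected by the depth test
    print(color_buffer[1][1])   # -> "near object"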
It struck me that most of the answers so far only discuss image-order occlusion.
I'm not entirely sure about CAD, but in games occlusion starts at a much higher level, using BSP trees, octrees and/or portal rendering to quickly determine the objects that appear within the viewing frustum.
The term you should search on is hidden surface removal.
Realtime rendering usually takes advantage of one simple method of hidden surface removal: backface culling. Each poly has a pre-calculated "surface normal". By checking the angle of the surface normal with respect to the camera, you know the surface is facing away and therefore does not need to be rendered.
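The test itself is just a dot product; here is a toy sketch with a made-up triangle and camera position:

    def sub(a, b):  return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b):  return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def is_backfacing(v0, v1, v2, camera_pos):
        normal = cross(sub(v1, v0), sub(v2, v0))       # face normal from winding order
        view_dir = sub(v0, camera_pos)                 # from the camera toward the face
        return dot(normal, view_dir) >= 0.0            # facing away: cull it

    camera = (0.0, 0.0, 5.0)
    front_face = ((0, 0, 0), (1, 0, 0), (0, 1, 0))     # counter-clockwise as seen from +z
    print(is_backfacing(*front_face, camera))          # False: keep and render
    print(is_backfacing(front_face[0], front_face[2], front_face[1], camera))  # True: cull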
Here are some interactive Flash-based demos and explanations.
Hardware pixel Z-buffering is by far the simplest technique; however, in high-density object scenes you can still end up trying to render the same pixel multiple times, which may become a performance problem in some situations. You certainly need to make sure that you're not mapping and texturing thousands of objects that just aren't visible.
I'm currently thinking about this issue in one of my projects; I've found this stimulated a few ideas:
http://www.cs.tau.ac.il/~dcor/Graphics/adv-slides/short-image-based-culling.pdf