I have a 3D surface created by ListPlot3D in Mathematica. The surface itself is relatively planar and lies roughly in the xy plane. I would like to place this surface on top of a molecule, either generated from the ChemicalData function or imported from a .pdb or other molecule file. The molecule is also planar and again lies in the xy plane. I would like these two 3D objects to be separated by some distance in z.
The end goal is to have a potential energy surface floating above the planar molecule that is still rotatable in 3D. I have been using Show and Graphics3D with no success. The x,y points on the surface correspond to x,y points on the 3D molecule, although they can easily be scaled and shifted as needed. As a last resort I suppose I could input the x,y,z coordinates of the atoms and use ListPointPlot3D with various options to mimic the look of the molecule, but that seems much more complicated than it needs to be.
Another possible approach would be to convert both 3D objects into 3D boxes and simply stack them on top of each other; however, I have not found that functionality in Mathematica yet. Does anyone have any ideas on how to do this?
PES = ListPlot3D[{{0., 0., -2.04900365`},..., {0., 0.3, -2.05743098`}}]
Show[Graphics3D[ChemicalData["Benzene","MoleculePlot"]],PES]
The issue was the scale of the molecule versus the scale of the energy surface.
The units, as best I can tell, are picometers; however, the atomic distances seem to be off by about 3%.
As an update: it turned out to be much simpler to take the xyz coordinates of the molecule and hand-generate the graphics objects. It has been some time, but I believe that if you simply evaluate:
ChemicalData["Benzene","MoleculePlot"]
Mathematica will show you the underlying format. If you need to do many of these, a fairly simple Python script can generate them; see the sketch below.
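For example, something along these lines (a rough sketch only: the element colors, radii, and xyz file layout here are assumptions, not the exact script I used) turns a list of atomic coordinates into Graphics3D primitives you can paste into Mathematica:
# Rough sketch: convert an "element x y z" file (coordinates in picometers)
# into Mathematica Sphere primitives for Graphics3D.
colors = {"C": "Gray", "H": "White", "O": "Red", "N": "Blue"}  # assumed palette
radii = {"C": 35, "H": 25, "O": 30, "N": 33}                   # assumed radii in pm

spheres = []
with open("molecule.xyz") as f:                                # hypothetical file name
    for line in f:
        parts = line.split()
        if len(parts) != 4:
            continue
        el = parts[0]
        x, y, z = map(float, parts[1:])
        spheres.append("{%s, Sphere[{%g, %g, %g}, %d]}"
                       % (colors.get(el, "Purple"), x, y, z, radii.get(el, 30)))

print("Graphics3D[{" + ", ".join(spheres) + "}]")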
As Szabolcs said, I also did not get from your question why something like this wouldn't work:
Show[
 Graphics3D[
  Rotate[ChemicalData["Caffeine", "MoleculePlot"][[1]], 45 Degree, {1, .5, 0}]],
 Plot3D[-200 + 50 Sin[x*y/10000],
  {x, -100 Sqrt[3*Pi], 100 Sqrt[3*Pi]}, {y, -100 Sqrt[3*Pi], 100 Sqrt[3*Pi]},
  ColorFunction -> "TemperatureMap"],
 Axes -> True]
ListPlot3D returns a Graphics3D object, so you should be able to combine it with other Graphics3D objects...
lp = ListPlot3D[RandomReal[1, {50, 3}], Mesh -> None];
sp = Graphics3D[Sphere[]];
Show[sp, lp, Boxed -> False]
although getting everything the same size will be the challenge...
I'm trying to understand the pattern (it looks similar to an animal print) that appears when two differently colored planes are plotted in (almost) the same plane. What formula does SageMath, which uses three.js, apply to create the pattern shown in the graph? The SageMath question/support area sent me to this support section for answers.
Example: here one plane is slightly larger, which makes SageMath show them both, but with a pattern. Also, as you move/manipulate the graph with the mouse, the pattern changes. What formula or information does SageMath (three.js) use to show the pattern?
I used the Sage Cell Server (https://sagecell.sagemath.org/) to plot this:
M = implicit_plot3d(lambda x,y,z: x, (-15,15), (-15,15), (-15,15), rgbcolor= (0.0, 1.0, 0.0), frame=true)
N = implicit_plot3d(lambda x,y,z: x, (-15,15), (-15,15), (-15,15.5), rgbcolor= (0.0, 0.0, 1.0), frame=true)
M+N
Thanks for any information you can provide!
I'm not very familiar with Sage, but this seems to be a case of z-fighting.
The idea is basically that the planes are so close to each other (and in this case occupy the same space!) that the renderer drawing the scene has trouble selecting a particular plane for each pixel.
So the "pattern" is just random glitches, and it changes when you move because the computation that decides which plane is "in front" changes with the viewing angle.
You can read in a lot more detail here.
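If you just want the pattern to go away, the usual workaround is to keep the two surfaces from occupying exactly the same space, for example by nudging one of them sideways a little (an untested tweak of your second implicit_plot3d call):
N = implicit_plot3d(lambda x,y,z: x - 0.1, (-15,15), (-15,15), (-15,15.5), rgbcolor=(0.0, 0.0, 1.0), frame=true)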
Now, the pattern does remind me of "noise patterns", which you might be interested in. There are a lot of resources for that; a good place to start could be The Book of Shaders.
I am using a handpose estimation model that uses the webcam to generate (x, y, z) coordinates for each joint of a moving hand (the z is estimated accurately). I also have a .glb character, made in Blender, with a full skeleton (including hands) rigged in a T-pose.
What I cannot figure out is how to use these real-time data points to animate the imported 3D character's hand in ThreeJS. The (x, y, z) values are Cartesian coordinates in 3D space, and from what I've read in the docs, ThreeJS uses Euler angles/quaternions for rotation (correct me if I'm wrong). I'm at an impasse right now because I am unsure of how to convert this positional data into angular data.
I am fairly new to animation so please do let me know if there are other libraries that can help me do this in an easier fashion.
I think you are looking for inverse kinematics. It calculates the variable joint parameters (angles or scales) from the position of the end of the chain. The end of the chain is your xyz position in space, the chain is the arm, for example, and the joints are the virtual joints of your character's rig.
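To make the idea concrete, here is a minimal two-joint, planar IK sketch in Python/numpy (just the math, not ThreeJS code; if I remember correctly, three.js also ships a CCDIKSolver among its examples that does this for full rigs):
import numpy as np

def two_link_ik(x, y, l1, l2):
    # Return (shoulder, elbow) angles that place a 2-segment arm's tip at (x, y);
    # l1 and l2 are the segment lengths.
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend directly from the target distance.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = np.arccos(cos_elbow)
    # Shoulder angle: direction to the target, corrected for the bent elbow.
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
    return shoulder, elbow

# Example: segments of length 1.0 and 0.8 reaching for the target (1.2, 0.5).
print(two_link_ik(1.2, 0.5, 1.0, 0.8))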
I have some images captured from a wide-angle (approx. 180 degree) camera.
I am using OpenCV 2.4.8, which gives me the camera matrix and distortion coefficients:
MatK = [[537.43775285, 0, 327.61133999], [0, 536.95118778, 248.89561998], [0, 0, 1]]
MatD = [-0.29741743, 0.14930169, 0, 0, 0]
I then used this information to remove the distortion.
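Roughly, the undistortion step is the standard cv2.undistort call with the values above (a minimal sketch; the file names are placeholders):
import cv2
import numpy as np

# Values from the calibration above.
K = np.array([[537.43775285, 0, 327.61133999],
              [0, 536.95118778, 248.89561998],
              [0, 0, 1]])
D = np.array([-0.29741743, 0.14930169, 0, 0, 0])

img = cv2.imread("input.jpg")           # placeholder file name
undistorted = cv2.undistort(img, K, D)  # remove lens distortion using K and D
cv2.imwrite("undistorted.jpg", undistorted)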
But the result is not as expected.
I have attached some input images of the chessboard which I used to calibrate.
Or are there any other tools or libraries by which the distortion can be removed?
Input images (from a normal camera, or even captured by my smartphone):
This is not an answer to the question, but a few notes on the "discussion" of distortion and planarity.
In reality you have some straight lines on a pattern:
With (nearly any) lens you'll get some kind of distortion, so that those straight lines aren't straight anymore after projection into your image. This effect is much stronger for wide-angle lenses. You could expect something like this (stronger for wide angles, but similar):
But the images you provided look more like this, which can be because your pattern wasn't really planar on the ground, or because the lens has some additional "hills" in its surface.
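You can see the bending numerically by pushing a straight line of points through the radial part of OpenCV's distortion model (a small sketch using the k1, k2 from the question; the tangential terms are zero here):
import numpy as np

k1, k2 = -0.29741743, 0.14930169   # first two entries of MatD above

# Points on a straight horizontal line, in normalized image coordinates.
x = np.linspace(-0.6, 0.6, 7)
y = np.full_like(x, 0.4)

# Radial part of the Brown-Conrady model that OpenCV uses.
r2 = x**2 + y**2
scale = 1 + k1 * r2 + k2 * r2**2
xd, yd = x * scale, y * scale

print(np.round(yd, 3))   # the y values are no longer constant: the line is bent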
The whole point of the calibration process is to tell OpenCV what a straight line looks like under distortion. A chess board is used to present a number of straight lines that are easy for OpenCV to detect. In your image, these lines are simply not straight. I'm moderately sure that OpenCV also needs square boxes.
So, use a real chess board pattern. Print it out, glue it to a piece of wood or hard plastic or whatever. But make sure it's a regular chessboard pattern on a level plane.
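Once you have a flat board, the usual OpenCV calibration loop looks roughly like this (a sketch only: the file names and the 9x6 inner-corner count are assumptions):
import cv2
import glob
import numpy as np

pattern = (9, 6)  # inner corners of the printed chessboard (assumed)
# Reference 3D points of the flat board, all with z = 0.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in glob.glob("calib_*.jpg"):   # placeholder file names
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, D, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)   # should be well under a pixel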
The most common method (used by the Oculus Rift runtime, for example) draws a sufficiently fine textured grid whose texture coordinates or grid node positions are chosen to compensate for the distortion. To obtain the grid, one normally fits a polynomial or a spline to some reference picture; the checkerboard in front of your camera is a common calibration target.
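In OpenCV the same precomputed-lookup idea is available as a remapping table; a minimal sketch reusing the K and D from the question:
import cv2
import numpy as np

K = np.array([[537.43775285, 0, 327.61133999],
              [0, 536.95118778, 248.89561998],
              [0, 0, 1]])
D = np.array([-0.29741743, 0.14930169, 0, 0, 0])

img = cv2.imread("input.jpg")   # placeholder file name
h, w = img.shape[:2]
# Precompute a grid that maps each undistorted pixel back to its source pixel,
# then warp the whole image through that grid in one pass.
map1, map2 = cv2.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h), cv2.CV_32FC1)
fixed = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)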
I'm following examples from an intro to webgl book (WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL) and I'm having trouble understanding why an orthographic projection helps solve this specific problem.
One of the examples has us changing the 'eye point' of how we're viewing 3 triangles by applying some matrix transformation. They show that if we change the viewpoint enough to the right (+X) that the triangle starts to disappear. Here is the exact webgl example from the book's website (Press right arrow key to rotate triangle): http://www.magic.ubc.ca/webgl-pg/uploads/examples/ch07/LookAtTrianglesWithKeys.html
The book says that this happens because "This is because you haven’t specified the visible range (the boundaries of what you can actually see) correctly."
To solve this they apply an orthographic projection matrix to each vertex first and the problem is then solved. Why does this solve the problem, how can a matrix transformation cause something which did not exist before to now be visible? Where can I find the full explanation as to why webgl chose to not display the triangle anymore?
The coordinate system into which WebGL/OpenGL ultimately maps objects for rendering (normalized device coordinates, loosely called screen space here) has a range of [-1, 1] for x, y, and z.
With viewMatrix.setLookAt(g_eyeX, g_eyeY, g_eyeZ, 0, 0, 0, 0, 1, 0); your example creates a transform matrix that is used to transform the coordinates of the triangles from world space to camera space (i.e., relative to the direction from which you look at the object).
Because this transformation changes the coordinates of the triangles, they may no longer lie within [-1, 1]; in your example this happens to the z coordinate (the triangle moves beyond the visible depth range).
To solve this you can use an orthographic projection to change which range of z coordinates counts as visible without changing the apparent shape of the scene: only the z values are rescaled into [-1, 1].
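To illustrate (this is not the book's code, just the standard orthographic projection matrix written out in numpy), here is how widening the visible range pulls a point's z back inside [-1, 1] so it is no longer clipped:
import numpy as np

def ortho(l, r, b, t, n, f):
    # OpenGL-style orthographic matrix: maps x in [l, r], y in [b, t] and
    # z in [-n, -f] (the camera looks down -z) into the [-1, 1] cube.
    return np.array([
        [2 / (r - l), 0, 0, -(r + l) / (r - l)],
        [0, 2 / (t - b), 0, -(t + b) / (t - b)],
        [0, 0, -2 / (f - n), -(f + n) / (f - n)],
        [0, 0, 0, 1],
    ])

p = np.array([0.2, 0.3, -3.0, 1.0])   # a camera-space point 3 units in front of the eye

# Visible range too small: z lands outside [-1, 1], so the point is clipped away.
print(ortho(-1, 1, -1, 1, 0.1, 2.0) @ p)
# Visible range large enough: z lands inside [-1, 1], so the point is drawn.
print(ortho(-1, 1, -1, 1, 0.1, 10.0) @ p)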
I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden. Example:
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
Create a transformation matrix by which to transform the cube's vertices
For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
I then get the surface normal of the face using Newell's method (sketched in Python right after this list), then take the dot product of that normal with some made-up vec3, e.g., [-1, 1, 1], since I couldn't think of a good value to put here. I've seen some folks use the position of the camera for this, but...
Skipping the usual step of using a camera matrix, I pull the x and y values from the resulting vectors to send to my line and face renderers, but only if the face's dot product is above 0. I realize it's rather arbitrary which ones I pull, really.
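For reference, here is the Newell's method step as I understand it, as a Python/numpy sketch (not my actual render.js code, just the math):
import numpy as np

def newell_normal(face):
    # Newell's method: surface normal of a (possibly non-planar) polygon,
    # given its vertices in winding order as an (n, 3) array.
    n = np.zeros(3)
    for i in range(len(face)):
        cur, nxt = face[i], face[(i + 1) % len(face)]
        n[0] += (cur[1] - nxt[1]) * (cur[2] + nxt[2])
        n[1] += (cur[2] - nxt[2]) * (cur[0] + nxt[0])
        n[2] += (cur[0] - nxt[0]) * (cur[1] + nxt[1])
    return n

# A unit-square face of the cube facing +z, listed counter-clockwise as seen from +z.
face = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
print(newell_normal(face))   # -> [0. 0. 2.]  (points along +z; length = 2 * area)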
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on the faces is wrong (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that the normals aren't what matters, that it's the direction in which the outline is drawn, but it's hard for me to wrap my head around that, or to tell whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
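A minimal sketch of that winding test (in Python for clarity; in your JavaScript renderer it is the same two-term cross product on the projected 2D points):
def is_front_facing(p0, p1, p2):
    # After projecting a triangle to 2D screen coordinates, keep it only if its
    # winding is counter-clockwise (positive signed area). Which sign counts as
    # "front" depends on your screen's y direction and your chosen convention.
    signed_area = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])
    return signed_area > 0

# The same triangle listed with both windings:
print(is_front_facing((0, 0), (1, 0), (0, 1)))   # True  (counter-clockwise)
print(is_front_facing((0, 0), (0, 1), (1, 0)))   # False (clockwise)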