Paper.js, force Cartesian coordinate system with origin at center

Using paper.js and loving it. I want to use the Cartesian coordinate system, though, as opposed to the "document"-style system that is the default.
When I draw on a canvas with plain JS (without paper.js), I apply the transformation like this (in $(document).ready()):
ctx = canvas.getContext("2d");
ctx.scale(1, -1);
ctx.translate(0, (-canvas.height/2));
But with the way paper.js runs in the browser, this doesn't work.
The view object in paper has a transform() method, yet the following code throws an error, even though the view object is clearly available, as the console output shows:
console.log(view);
view.transform( new Matrix(1,0,0,-1,300,300) );
// console output:
// CanvasView {_context: CanvasRenderingContext2D, _eventCou......
// Uncaught TypeError: view.transform is not a function
I'm thinking I need to call transform() in some document-load event? But I don't see any such straightforward event in paper's API.
Is there a simple way to apply a linear transformation to the whole view once, at the beginning, to get a Cartesian system?

I figured out that the following works:
project.activeLayer.transform( new Matrix(1,0,0,-1,view.center.x, view.center.y) );
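For reference, a minimal plain-JavaScript sketch of the whole setup (the canvas id and the circle are just illustrative): everything is drawn first, and the transform is applied to the layer once at the end.
window.onload = function () {
    paper.setup(document.getElementById('myCanvas'));

    // Draw using Cartesian coordinates: after the transform below,
    // (0, 0) is the center of the view and y increases upwards.
    new paper.Path.Circle({ center: [0, 100], radius: 10, fillColor: 'red' });

    // Flip the y-axis and move the origin to the view center, once:
    paper.project.activeLayer.transform(
        new paper.Matrix(1, 0, 0, -1, paper.view.center.x, paper.view.center.y)
    );

    paper.view.draw();
};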

Related

Three.js calling clipAction.play() makes animated objects vanish

In Three.js, calling action.play() makes objects simply vanish, without any error or warning in the console.
I use THREE.ObjectLoader to load a JSON file created in Blender. The SRT (position/scale/quaternion) animation is in the generated file, as are the morph targets. To optimise file size I animated the SRT as a series of null objects. The morph target tracks are in the main object, which I clone 5 times to build the characters (balloons, to be exact).
I previously did extensive testing to introduce shape/morph animation. After succeeding I finalised all the animations, only to be thwarted by the disappearing models. The SRT (position/scale/quaternion) animation was working fine before, but after refactoring the code to be less spaghettied, the objects vanish exactly when action.play() is called. Logging the mixers and the array containing the clips, everything looks correct (i.e. I see the tracks, the names are right, etc.). Examining the newly generated JSON, it also seems the same and correct (and I have not changed the SRT animations, only introduced shape animation).
So I am lost, and think this looks more and more like a bug. From previous experience I do know it works (or has worked).
I created a jsfiddle: https://jsfiddle.net/oompol/3ya6sqed/
[edit] I have since turned on action.play() and call the function from the link in the div. [/edit] Please note that I originally commented out the call to action.play(), so you can see that the load and init work. The function is listed below:
function playScene(scene) {
    for (var parentName in srtMixers) {
        var clpName = "balloon1_fly";
        var clp = THREE.AnimationClip.findByName(animLib, clpName);
        var action = srtMixers[parentName].clipAction(clp);
        action.clampWhenFinished = true;
        console.log("playScene:", clpName, clp, parentName, srtMixers);
        // this is when the problem happens
        action.play();
    }
}
This is the JSON I am loading:
https://rawgit.com/bakajin/2e3d2f6a722103ed4aefd76f6250ec08/raw/28cad35c20060d478499c0cd40a2753611993720/oomp-scene_balloons-oomp-6.9.4.json
OK, there was indeed something very wrong with the scaling.
The io_three JSON exporter for Blender (r87 dev) writes incorrect matrix transformation data into the geometry object (really tiny scaling values). The animation track with the scaling keys was correctly written as 1,1,1, so all the objects just scaled out of view immediately.
This is hard to see because the geometry has no separate scaling value, only a matrix. It seems to happen when you set "Scene" to true on export.
I worked around the problem by entering the correct scaling value in the keyframe tracks. This will only work if you have no scaling animation (so the keys are all 1).
Meanwhile I have extensively edited the JSON by hand, because this is not the only incorrect data: the formatting of the animation object is also wrong, the durations for the morphTargetInfluence keys are incorrect, and the formatting of those keys is not always correct either.
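If you would rather patch the clips at run time than edit the JSON by hand, a rough sketch of the same workaround (it assumes the clip contains no real scale animation, and correctScale is a placeholder for whatever value the exporter should have written, e.g. 1):
var correctScale = 1; // placeholder value
clip.tracks.forEach(function (track) {
    // Overwrite every key of the .scale tracks with the correct value:
    if (/\.scale$/.test(track.name)) {
        for (var i = 0; i < track.values.length; i++) {
            track.values[i] = correctScale;
        }
    }
});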
Hope this helps some other people.

three.js: Are there methods for loading Line, Point, and Text data (and combinations)?

I'm attempting to migrate my Coin3d geology visualization projects over to Three.js. I've experimented with the various loaders and have decided to use the JSON format & loader to load mesh data, but I cannot find a method for storing and loading lines, points, and text. I tried the VRMLLoader, but the following code:
var vloader = new THREE.VRMLLoader();
vloader.load('line.wrl', function (geometry) {
var line = new THREE.Line(geometry);
scene.add( line );
});
returns nothing, which isn't surprising, given that IndexedLineSet is not referenced in VRMLLoader.js (IndexedFaceSet, Cylinder, Cone, etc. are there). The JSON Geometry format 4 and Model Format 3 are mesh-centric if not mesh-exclusive, and I wonder if there are plans to add something like
"data":{
"lines":[3,0,1,2,3...],
"points":[0,2,4,1,3...]
}
to the spec? In the meantime, does one of the other loaders support loading Lines, Points, and Text? If not--and I assume the answer is no--is the best way to go about this to hack the JSONLoader to read
"lines":[3,0,1,2,3...] # or whatever I want to call it
and if so, how would one go about doing so? In the loader callback, or would I have to make a custom my_JSONLoader.js?
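To make the loader-callback idea concrete, something like this is what I have in mind (the top-level "lines" field and its layout, a flat list of vertex indices forming one polyline, are my own invention and not part of the official format):
var loader = new THREE.JSONLoader();
loader.load('model.json', function (geometry, materials) {
    scene.add(new THREE.Mesh(geometry, materials ? materials[0] : undefined));

    // JSONLoader only hands back geometry/materials, so re-fetch the raw file
    // to read the custom "lines" array:
    var request = new XMLHttpRequest();
    request.open('GET', 'model.json');
    request.onload = function () {
        var data = JSON.parse(request.responseText);
        var lineGeometry = new THREE.Geometry();
        (data.lines || []).forEach(function (index) {
            lineGeometry.vertices.push(geometry.vertices[index].clone());
        });
        scene.add(new THREE.Line(lineGeometry, new THREE.LineBasicMaterial({ color: 0xff0000 })));
    };
    request.send();
});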
I am currently working on support for IndexedLineSet in the VrmlParser project. VrmlParser uses a ThreeJs renderer for display: http://github.com/bartmcleod/VrmlParser

Using a FrameBufferObject with several Color Texture attachments

I'm implementing a Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one on the Y axis) into a specific texture (let's call it tex_1) and use the information contained in tex_1 as input for a second render pass (for the X axis) that fills another texture (let's call it tex_2) with the final Gaussian blur result.
Good practice would be to create 2 framebuffers (FBOs), each with a texture attached, both linked to GL_COLOR_ATTACHMENT0 (for example). But I wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
If so, I would have to enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture for the correct render pass, as follows:
Pseudo code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); // tex_1
    {
        // BIND the external texture to blur
        // DRAW code (Y axis blur pass) here...
        // -> Write the result into COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); // tex_2
    {
        // BIND the first texture (tex_1) filled above in the first render pass
        // DRAW code (X axis blur pass) here...
        // -> Use this texture in the FS to compute the final result
        //    within COLOR_ATTACHMENT1 (tex_2) -> the final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem: for each render pass I need to bind an external texture as an input to my fragment shader. Consequently, the first binding of the texture (the color_attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options for doing this. The 3rd one will actually not work in OpenGL ES, but I'll explain it anyway because you might otherwise be tempted to try it, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: they do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup, and then only need glBindFramebuffer() calls to switch between them. Binding a different object is generally considered cheaper than modifying an existing object, which you need for the second approach.
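For concreteness, here is a minimal sketch of the first (two-FBO) approach. I've written it in WebGL2/JavaScript since those calls map one-to-one onto the OpenGL ES ones; tex1/tex2 are assumed to be already-allocated textures, and the two blur-pass draw functions are placeholders for your own rendering code.
// Setup: one FBO per target texture.
function createBlurTarget(gl, tex) {
    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    return fbo;
}
var fbo1 = createBlurTarget(gl, tex1);
var fbo2 = createBlurTarget(gl, tex2);

// Per frame: only framebuffer binds are needed to switch render targets.
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo1);
drawBlurPassY(sourceTexture);           // writes into tex1
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo2);
drawBlurPassX(tex1);                    // reads tex1, writes into tex2
gl.bindFramebuffer(gl.FRAMEBUFFER, null);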
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then, it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind this texture to some texture unit. You might get some undefined results if you create a feedback loop (rendering to a texture you are reading from).
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment to this one), you can't simply select a single color attachment to render to in unextended GLES 1/2, and you need some tricks in GLES 3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.

Alternatives to using a MovieClip or BitmapData for an image?

I've been trying for two days to find an alternative way to load an image into my current project. I am using Adobe Flash Professional CS6 as my IDE and animation program. I would like to be able to display an image in my application. What I am trying to do is have the image display on the screen, the user enters the PLU associated with the image, and if the PLU is right then they receive a point. I have everything else ready to go, but I just can't find an efficient way to deal with loading the image.
Right now I'm using this to accomplish getting my image on the display:
var myDisp:Layer0 = new Layer0();
var bmp:Bitmap = new Bitmap(myDisp);
spDispBox.addChild(bmp);
The above code works just fine, but the limitation I can't get around is that I'm going to have to import each image into the library and then code each part in consecutively. I want to stick to OOP and streamline this process; I just don't know where I should turn to accomplish my project goal. I'm more than happy to give more information. Thanks in advance, everyone.
July 26, 2014 - Update: I agree, now, that XML is the way to go. I'm just having a hard time getting the grasp of loading an external XML file. I'm following along, but still not quite getting the idea. I understand creating a new XML data object, a Loader, and a URLRequest; it's just loading the picture that trips me up. I've been able to get output by using trace in the function to see that the XML is loaded, but when I go to add the XML data object to the stage I'm getting a null object reference.
I'm going to try a few more things; I just wanted to update the situation. Thanks again, everyone.
It seems like these images are in your FLA library. To simplify your code you can make a singleton class, which you could name ImageFactory (factory design pattern), and call it whenever you need an image; it will return a Sprite (lighter than a MovieClip):
spDispBox.addChild( ImageFactory.getImageA() ); // returns a Sprite with your image
and in your ImageFactory
public function getImageA():DisplayObject {
    var image:Layer0 = new Layer0(); // image from the FLA library
    var holder:Sprite = new Sprite();
    holder.addChild( new Bitmap( image ) );
    return holder;
}
I would also recommend using a more descriptive name than Layer0.

How to generate texture mapping images?

I want to wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library, I want to use mapping images. Mapping images are used in the following way:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is: how can I generate such mapping images (given the 3D model)? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image, and I want to generate mapping images that I can use to produce the textured image.
I don't even know where to start, any pointers are appreciated.
More info
My initial idea was to somehow warp an identity mapping (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see whether mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y): they just resize the image; they don't do anything fancy
result
As you can see, there are lots of artifacts. Custom mapping images that I generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using OpenCV remap() function for mapping.
If I understand you right here, you want to do all of it in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture-image, so that's:
vector<Point2f> p_src(4), p_dst(4);
p_src[0] = Point2f(0,0);
p_src[1] = Point2f(0,src.rows-1);
p_src[2] = Point2f(src.cols-1,0);
p_src[3] = Point2f(src.cols-1,src.rows-1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to do that on your own ;(
// let's say, you've come up with this for the cube - top:
p_dst[0] = Point2f(15,15);
p_dst[1] = Point2f(44,19);
p_dst[2] = Point2f(56,30);
p_dst[3] = Point2f(33,44);
// now you need the projection matrix to transform from one to another:
Mat proj = getPerspectiveTransform( p_src, p_dst );
// finally, you can warp your texture to the dst-polygon:
warpPerspective(src, dst, proj, dst.size());
If you can get hold of the 'Learning OpenCV' book, it's described around p. 170.
A final word of warning, since you're complaining about artefacts: yes, it'll all look pretty cheesy. 'Real' 3D engines do a lot of work here (subpixel UV mapping, filtering, mipmapping, etc.), so if you want it to look nice, consider using the 'real' thing.
By the way, there's nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting will be difficult to do, and problems with the depth buffer will be abundant.
Assuming all your objects will only ever be viewed from one angle, you need to render each of them to 3 textures:
UV-map
Normal-map
Depth-map (to correct the depth-buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...
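As an illustration of the first of those three textures, here is a rough three.js sketch (the library choice and all names are mine, purely as an example) that renders the model's UVs into a render target; the resulting red/green channels are exactly the kind of x/y mapping images you would feed to remap():
// Shader that writes the surface UV coordinates out as colour.
var uvMaterial = new THREE.ShaderMaterial({
    vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    gl_FragColor = vec4(vUv, 0.0, 1.0); // R = u (x map), G = v (y map)',
        '}'
    ].join('\n')
});

// Render the object with this material into an offscreen target and read it back.
var target = new THREE.WebGLRenderTarget(1024, 1024);
mesh.material = uvMaterial;
renderer.setRenderTarget(target);   // on older versions: renderer.render(scene, camera, target)
renderer.render(scene, camera);
renderer.setRenderTarget(null);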
