I'm currently working on a little game written in JS using Pixi.js (https://github.com/pixijs). A problem has come up: I'm trying to implement pixel-exact collision between shapes, and while programming I noticed that all the pixel RGBA values of my images are just 0.
I searched the web for a while, but the only cause I could find for this kind of problem was a canvas tainted by CORS (Pixel RGB values are all zero).
But that can't be the reason in my case, because I created the sprites myself; I'm not loading them from another (or any) domain or anything like that.
Could this be a problem with the images themselves? How do I avoid it? I'll append some code that works if I use other images (some images I downloaded for testing).
const app = new PIXI.Application({ width: 500, height: 500 });
document.body.appendChild(app.view);

// load the sprite, add it to the stage, then read its pixels back
PIXI.loader.add("sprites/test.png")
    .load(() => {
        let img = new PIXI.Sprite(PIXI.loader.resources["sprites/test.png"].texture);
        app.stage.addChild(img);
        // extract.pixels() returns a flat array of RGBA values
        console.log(app.renderer.extract.pixels(img));
    });
Edit: I also tried reading the RGBA values with a short Java program, by the way; same problem, every single value is zero.
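For context, this is roughly the per-pixel test I want to run on the extracted arrays once they stop coming back as zeros (a sketch; pixelsOverlap and its parameters are my own names, and it assumes both sprites were extracted over the same overlap region):

function pixelsOverlap(pixelsA, pixelsB, width, height) {
    // the arrays are flat RGBA, so indices 3, 7, 11, ... are the alpha channels
    for (let i = 3; i < width * height * 4; i += 4) {
        if (pixelsA[i] > 0 && pixelsB[i] > 0) return true;
    }
    return false;
}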
Preferably, I'd like to use an array, iterating over each pixel and setting its R, G, B values.
And I don't think I can use an HTML canvas in any way: I'm hoping to build this right on top of a Google Doc, without additional libraries or references to external websites.
Everything I have found on the Image class is about positioning or resizing, not about setting the image's content.
ImageItem.setImage() looks promising, but its documentation is not particularly descriptive.
You can implement your own encoding algorithm (or port someone else's) and transform your pixel array into an image blob compatible with the ImageItem.setImage() method.
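For instance, here is a minimal sketch of that idea, not production code: encodeBmp and all names below are mine, and it assumes a 24-bit uncompressed BMP built from a flat top-down RGB array is acceptable input for setImage().

function encodeBmp(pixels, width, height) {
    var rowSize = Math.ceil(width * 3 / 4) * 4;   // BMP rows are padded to 4 bytes
    var dataSize = rowSize * height;
    var fileSize = 54 + dataSize;                 // 14-byte file header + 40-byte info header
    var b = new Uint8Array(fileSize);
    var v = new DataView(b.buffer);
    b[0] = 0x42; b[1] = 0x4D;                     // "BM" signature
    v.setUint32(2, fileSize, true);               // total file size (little-endian)
    v.setUint32(10, 54, true);                    // offset of the pixel data
    v.setUint32(14, 40, true);                    // BITMAPINFOHEADER size
    v.setInt32(18, width, true);
    v.setInt32(22, height, true);                 // positive height = bottom-up rows
    v.setUint16(26, 1, true);                     // colour planes
    v.setUint16(28, 24, true);                    // bits per pixel
    v.setUint32(34, dataSize, true);              // compression field stays 0 (BI_RGB)
    for (var y = 0; y < height; y++) {
        var row = 54 + (height - 1 - y) * rowSize; // BMP stores rows bottom-up
        for (var x = 0; x < width; x++) {
            var i = (y * width + x) * 3;
            b[row + x * 3]     = pixels[i + 2];    // BMP wants BGR order
            b[row + x * 3 + 1] = pixels[i + 1];
            b[row + x * 3 + 2] = pixels[i];
        }
    }
    return b;
}

// Usage in Apps Script; note its byte arrays are signed, so shift values > 127:
// var bytes = Array.from(encodeBmp(pixels, w, h), function (v) { return v > 127 ? v - 256 : v; });
// imageItem.setImage(Utilities.newBlob(bytes, 'image/bmp', 'pixels.bmp'));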
Hello everyone. I've been trying to render a book page attached to your hand in VR (testing on an Oculus Go) using A-Frame. Initially I tried using a plane and applying the text to it with the text attribute, defining its value, alignment, font, and so on. That worked well enough, but the text gets "jagged edges" that grow worse the more you move your hand (which is basically impossible not to do), making it a very poor fit for long-form text such as a book page.
Then I explored an alternative: the aframe-html-shader by mayognaise. Creating an HTML div and styling it with CSS is the perfect solution in terms of customization, alignment, and so on, and when I render it the text no longer has any "jagged edges" (since it's basically a texture).
However, it comes out blurry enough that it becomes tiresome for long reads.
I've tried everything I could think of to increase its sharpness, but it stays blurry, which makes absolutely no sense to me.
What I've tried:
Increasing the size of the object the texture applies to and then scaling it back after the render - result: same thing...
Increasing the size of the canvas or the texture inside aframe-html-shader.js - result: the same thing... however, some of the tinkering attempts triggered an "image too big (...) scaling down to 4000" warning (4000-something; I don't recall the exact value), which seems to indicate the canvas is already being rendered at full resolution.
Switching from Mayognaise's aframe-html-shader.js to the wildlifela fork (which already has a "scale" option on the shader) and applying "canvasScale: 2" - result: same thing...
Using a 4000px-wide HTML element as the object to render from, with the font size increased accordingly - result: same thing...
I'm out of ideas and really don't understand why I can't get good enough text out of the HTML shader, since if the text is inside an image and I use that same image as a texture, the text comes out perfectly readable.
Need some help from all the A-Frame experts and developers over here!
Thank you all in advance!
Regarding "increasing the size of the canvas or the texture inside aframe-html-shader.js (...) some of the tinkering attempts seem to trigger an 'image too big (...) scaling down to 4000' warning":
The image wasn't too big. This happens because textures need to be a power-of-two size (e.g., 4096x4096), so the "4000-something" in the warning was most likely 4096.
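For illustration, here's a quick way to see which power-of-two size a canvas dimension would land on (plain JS, nothing A-Frame-specific; the function name is mine):

function nextPowerOfTwo(n) {
    return Math.pow(2, Math.ceil(Math.log2(n)));
}
nextPowerOfTwo(3000); // 4096, which would explain the "scaling down to 4000-something" message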
The standard text component should be the clearest option, though. The A-Frame master branch has a fix that makes text look clearer, which might help: https://github.com/aframevr/aframe/commit/8d3f32b93633e82025b4061deb148059757a4a0f
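As a rough sketch of that suggestion (the property names belong to the built-in text component, but the values are illustrative guesses, not tuned settings):

var page = document.createElement('a-entity');
// 'text' is A-Frame's built-in SDF-based text component
page.setAttribute('text', {
    value: 'A long page of book text...',
    align: 'left',
    width: 0.6,      // width in metres in world space
    wrapCount: 70    // characters per line; higher packs glyphs more densely
});
page.setAttribute('position', '0 1.5 -0.5');
document.querySelector('a-scene').appendChild(page);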
In three.js, calling action.play() makes objects simply vanish, with no error or warning in the console.
I use THREE.ObjectLoader to load a JSON file created in Blender. The SRT (position/scale/quaternion) animation is in the generated file, as are the morph targets. To optimise file size, I animated the SRT as a series of null objects. The morph-target tracks are in the main object, which I clone five times to build the characters (balloons, to be exact).
I previously did extensive testing to introduce shape/morph animation. After succeeding, I finalised all the animations, only to be trumped by the disappearing models. The SRT animation was working fine before, but after refactoring the code to be less spaghettied, the objects vanish at exactly the moment action.play() is called. Logging the mixers and the array containing the clips, everything looks correct (i.e. I see the tracks, the names are right, etc.). Examining the newly generated JSON, it also seems correct (and I have not changed the SRT animations, only introduced the shape animation).
So I am lost, and this looks more and more like a bug to me. From previous experience I know it works (or has worked).
I created a jsfiddle: https://jsfiddle.net/oompol/3ya6sqed/
[edit] I have since enabled action.play(); you can call the function from the link in the div. [/edit] Please note that I originally commented out the call to action.play(), so you can see that loading and init work. The function is listed below:
function playScene(scene) {
    // srtMixers holds one AnimationMixer per parent null object
    for (var parentName in srtMixers) {
        var clpName = "balloon1_fly";
        var clp = THREE.AnimationClip.findByName(animLib, clpName);
        var action = srtMixers[parentName].clipAction(clp);
        action.clampWhenFinished = true;
        console.log("playScene:", clpName, clp, parentName, srtMixers);
        // this is when the problem happens
        action.play();
    }
}
This is the JSON I am loading:
https://rawgit.com/bakajin/2e3d2f6a722103ed4aefd76f6250ec08/raw/28cad35c20060d478499c0cd40a2753611993720/oomp-scene_balloons-oomp-6.9.4.json
OK, there was something very wrong with the scaling indeed.
The io_three JSON exporter for Blender (r87 dev) writes incorrect matrix transformation data into the geometry object (really tiny scaling values). The animation tracks with the scaling keys were correctly written as 1,1,1, so all the objects just scaled out of view immediately.
This is hard to see, because the geometry has no separate scaling value, only a matrix. It seems to happen when you set "Scene" to true on export.
I worked around the problem by entering the scaling value into the keyframe tracks. But this will only work if you have no scaling animation (so the keys are all 1).
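In code, a rough equivalent of that workaround would be to patch each clip after loading (a sketch; resetScaleTracks is my own name, and it assumes the scale keys should all be 1):

function resetScaleTracks(clip) {
    clip.tracks.forEach(function (track) {
        // scale tracks are named like "objectName.scale"
        if (track.name.indexOf('.scale') !== -1) {
            for (var i = 0; i < track.values.length; i++) {
                track.values[i] = 1; // flatten every key to scale (1,1,1)
            }
        }
    });
}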
Meanwhile, I have extensively edited the JSON by hand, because this is not the only incorrect data: the formatting of the animation object is also wrong, and the durations of the morphTargetInfluence keys are incorrect too. The formatting of those keys is also not always correct.
Hope this helps some other people.
I am trying to curve and round an image, but I am not able to do it perfectly. I have tried creating an .amd file and setting it as the background, but this does not work perfectly. Is there any other way to make the image round as well as curved on BlackBerry 10?
I am getting an image as a response from the server, like below:
I want something like the following images. The images are not static; they are dynamic and come from a web service.
I have also checked the links on the BlackBerry forums but did not find a proper solution. If anyone knows, please let me know.
To put rounded corners on an image, I would use the Nine Slice feature described in the API. Using a drawing program, create a small square example of the frame, then use the nine-slice system to scale it to the size of your image and lay it over the image.
The same procedure will work for circularly vignetting images. Depending on how many sizes you want, you may have to draw them on the fly, or keep several sizes and scale to the others.
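To make the idea concrete, here is the general nine-slice technique sketched in plain JavaScript/canvas terms (not BlackBerry API code, just an illustration of the principle: the corners stay fixed while the edges and centre stretch; all names are mine):

function drawNineSlice(ctx, img, x, y, w, h, inset) {
    var iw = img.width, ih = img.height;
    // source and destination rectangles for the 3x3 grid of slices
    var sx = [0, inset, iw - inset], sw = [inset, iw - 2 * inset, inset];
    var sy = [0, inset, ih - inset], sh = [inset, ih - 2 * inset, inset];
    var dx = [x, x + inset, x + w - inset], dw = [inset, w - 2 * inset, inset];
    var dy = [y, y + inset, y + h - inset], dh = [inset, h - 2 * inset, inset];
    for (var r = 0; r < 3; r++) {
        for (var c = 0; c < 3; c++) {
            ctx.drawImage(img, sx[c], sy[r], sw[c], sh[r], dx[c], dy[r], dw[c], dh[r]);
        }
    }
}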
I want to put/wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library, I want to use mapping images. Mapping images are used in the following way:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is: how can I generate such mapping images, given the 3D model? Since I don't know the terminology, my searches have failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image; I want to generate the mapping images that I can use to produce the textured image.
I don't even know where to start, any pointers are appreciated.
More info
My initial idea was to somehow warp identity mappings (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see if the mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y), they just resize the image, they don't do anything fancy.
result
As you can see, there are lots of artifacts. Custom mapping images I generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using OpenCV's remap() function for the mapping.
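To be concrete about what I mean by a mapping image, here is the operation in plain JavaScript (a nearest-neighbour sketch of what remap() does; applyMapping and the parameter names are mine):

function applyMapping(srcData, mapX, mapY, width, height) {
    // srcData is an ImageData; mapX[i] / mapY[i] hold the source coordinates
    // to sample for output pixel i (nearest-neighbour, no filtering)
    var out = new ImageData(width, height);
    for (var i = 0; i < width * height; i++) {
        var sx = Math.round(mapX[i]);
        var sy = Math.round(mapY[i]);
        if (sx < 0 || sy < 0 || sx >= srcData.width || sy >= srcData.height) continue;
        var s = (sy * srcData.width + sx) * 4;
        var d = i * 4;
        for (var c = 0; c < 4; c++) out.data[d + c] = srcData.data[s + c];
    }
    return out;
}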
If I understand you right, you want to do all of this in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture-image, so that's:
vector<Point2f> p_src(4), p_dst(4); // getPerspectiveTransform() needs float points
// source corners, in order: top-left, top-right, bottom-right, bottom-left
p_src[0] = Point2f(0, 0);
p_src[1] = Point2f(src.cols - 1, 0);
p_src[2] = Point2f(src.cols - 1, src.rows - 1);
p_src[3] = Point2f(0, src.rows - 1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to do that on your own ;(
// let's say you've come up with this for the cube top
// (same corner order as p_src, or the warp will come out twisted):
p_dst[0] = Point2f(15, 15);
p_dst[1] = Point2f(44, 19);
p_dst[2] = Point2f(56, 30);
p_dst[3] = Point2f(33, 44);
// now you need the projection matrix to transform from one to another:
Mat proj = getPerspectiveTransform(p_src, p_dst);
// finally, warp your texture to the dst-polygon; dst is your pre-allocated
// output image, so the warped patch lands at the right place in it:
warpPerspective(src, dst, proj, dst.size());
If you can get hold of the "Learning OpenCV" book, this is described around p. 170.
A final word of warning, since you're complaining about artifacts: yes, it will all look pretty cheesy. "Real" 3D engines do a lot of work here: subpixel UV mapping, filtering, mipmapping, etc. If you want it to look nice, consider using the real thing.
By the way, there's nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting is difficult, and depth-buffer problems will be abundant.
Assuming all your objects should only ever be viewed from one angle, you need to render each of them to three textures:
UV-map
Normal-map
Depth-map (to correct the depth-buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...
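For what it's worth, the UV-map part can be sketched in three.js (assuming a standard setup with renderer, scene, camera, and mesh already created): render the model with a material that writes each fragment's UV into the red/green channels, and the resulting pixels are exactly the per-pixel lookup table the question calls a mapping image.

// sketch: a material that outputs UV coordinates as colours
var uvMaterial = new THREE.ShaderMaterial({
    vertexShader:
        'varying vec2 vUv;\n' +
        'void main() {\n' +
        '    vUv = uv;\n' +
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);\n' +
        '}',
    fragmentShader:
        'varying vec2 vUv;\n' +
        'void main() {\n' +
        '    gl_FragColor = vec4(vUv, 0.0, 1.0); // R = u, G = v\n' +
        '}'
});
mesh.material = uvMaterial;
renderer.render(scene, camera);
// read the pixels back, e.g. by rendering into a THREE.WebGLRenderTarget
// and calling renderer.readRenderTargetPixels()

Note that an 8-bit render target quantizes the UVs to 256 steps, which is one likely source of the artifacts mentioned above; a float render target would help.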