Image pixel-by-pixel manipulation in Fabric.js

For example, the canvas getImageData() method returns an ImageData object that copies the pixel data for the specified rectangle on a canvas.
Is it possible to get similar functionality using any Fabric.js method?

It seems the way to do this is to get the context from the fabric canvas, e.g.
var myFabricCanvas = new fabric.Canvas("id");
// inputImage here is the image element whose dimensions you want to read
var myImageData = myFabricCanvas.getContext().getImageData(0, 0, inputImage.width, inputImage.height);
Have a look at the filters described at http://fabricjs.com/fabric-intro-part-2#image_filters as well.
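As a rough illustration, here is a minimal, untested sketch that inverts every pixel on the fabric canvas via the underlying 2D context. It assumes the canvas has already rendered and is not tainted by cross-origin images:
var canvas = new fabric.Canvas("id");
var ctx = canvas.getContext();
var imageData = ctx.getImageData(0, 0, canvas.getWidth(), canvas.getHeight());
var data = imageData.data; // flat Uint8ClampedArray of RGBA values
for (var i = 0; i < data.length; i += 4) {
    data[i]     = 255 - data[i];     // red
    data[i + 1] = 255 - data[i + 1]; // green
    data[i + 2] = 255 - data[i + 2]; // blue
    // data[i + 3] is alpha, left as-is
}
ctx.putImageData(imageData, 0, 0);
Note that fabric will repaint over this on its next renderAll(), so for a persistent effect the image filters linked above are the better fit.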

Related

How do I Crop Images in Flutter?

I have been searching for days on this question.
I want to crop images like this in Flutter:
GIF Source: https://github.com/ArthurHub/Android-Image-Cropper
The closest library for this is the image lib, which offers image manipulation including cropping, but I want to crop images at the UI level like in this GIF. None of the libraries I found offer that.
There is no widget that performs all of that for you. However, I believe it is possible to write that natively in Flutter now. I don't have time at this particular moment to do it for you, but I can definitely point you in the right direction.
You're going to need to load the image in such a way that you can either draw it onto a canvas or use a RawImage to draw it rather than using the Image widget directly.
You need to figure out a co-ordinate system relative to the image.
You'll need to find a way of drawing the crop indicator - you could do this either by drawing directly on the canvas or possibly using some combination of GestureDetector/Draggable/DragTarget. I'd suggest that sticking to Canvas might be the easiest to start.
Once the user has selected a part of the image, you need to translate the screen co-ordinates to picture co-ordinates.
You then have to create an off-screen canvas to draw the cropped image to. There are various transforms you'll have to do to make sure the image ends up in the right place.
Once you've made the off-screen crop, you'll have to display the new image.
All of that is quite a lot of work, and probably a lot of finessing to get right.
Here are examples for a couple of the steps you'll need, but you'll have to figure out how to put them together.
Loading an image:
import 'dart:ui' as ui;

var byteData = await rootBundle.load("assets/image.jpg");
Uint8List lst = new Uint8List.view(byteData.buffer);
var codec = await ui.instantiateImageCodec(lst);
var frameInfo = await codec.getNextFrame();
var image = frameInfo.image; // a dart:ui Image, ready to draw on a canvas
Displaying an image on a canvas:
https://docs.flutter.io/flutter/dart-ui/Canvas/drawImageRect.html
https://docs.flutter.io/flutter/rendering/CustomPainter-class.html
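For example, a minimal CustomPainter along these lines can draw a dart:ui Image into a widget (the class name is just illustrative):
import 'dart:ui' as ui;
import 'package:flutter/material.dart';

class ImagePainter extends CustomPainter {
  ImagePainter(this.image);

  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) {
    // Stretch the whole source image into the available widget space.
    final src = Rect.fromLTWH(0.0, 0.0, image.width.toDouble(), image.height.toDouble());
    final dst = Rect.fromLTWH(0.0, 0.0, size.width, size.height);
    canvas.drawImageRect(image, src, dst, Paint());
  }

  @override
  bool shouldRepaint(ImagePainter oldDelegate) => oldDelegate.image != image;
}
You would then show it with CustomPaint(painter: ImagePainter(image)).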
Writing an image to an off-screen canvas:
ui.Image getCroppedImage(ui.Image image, Rect src, Rect dst) {
  var pictureRecorder = new ui.PictureRecorder();
  Canvas canvas = new Canvas(pictureRecorder);
  canvas.drawImageRect(image, src, dst, Paint());
  // Note: on newer Flutter SDKs, Picture.toImage returns a Future<ui.Image>,
  // so this would need to become async and await the result.
  return pictureRecorder.endRecording().toImage(dst.width.floor(), dst.height.floor());
}
You'll probably need to do something like this answer for getting the local coordinates of mouse/touch gestures.
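A sketch of that coordinate translation, assuming the tap lands on the CustomPaint from the painter above (context is the enclosing BuildContext):
GestureDetector(
  onTapDown: (details) {
    // Convert the global tap position into this widget's local space.
    final RenderBox box = context.findRenderObject() as RenderBox;
    final Offset local = box.globalToLocal(details.globalPosition);
    // 'local' can now be mapped to picture co-ordinates.
  },
  child: CustomPaint(painter: ImagePainter(image)),
)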
Some advice - I'd start as simple as possible, not thinking about performance to start (i.e. draw everything each paint if needed, etc). Then once you get the basics working you can start thinking of optimization (i.e. using a RawImage, Transform, and Stack for the image and only re-drawing the selector, etc).
If you need any additional help let me know in a comment and I'll do my best to answer. Now that I've been writing about this a bit it does make me slightly curious to try implementing it so I may try at some point, but it probably won't be soon as I'm quite low on time at the moment. Good luck =D
The image_cropper plugin does exactly what you are looking for.
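For reference, usage is roughly like this (the named parameters below match early versions of the plugin; check the package docs for your version):
import 'package:image_cropper/image_cropper.dart';

File cropped = await ImageCropper.cropImage(
  sourcePath: imageFile.path,
  ratioX: 1.0,
  ratioY: 1.0,
  maxWidth: 512,
  maxHeight: 512,
);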

Depth component readRenderTargetPixels in Three.js?

Can depth pixel numbers be extracted from THREE.WebGLRenderer, similar to the .readRenderTargetPixels functionality? Basically, is there an update to this question? My starting point is Three.js r80. Normalized values are fine if I can also convert them to distances.
Related methods:
I see that WebGL's gl.readPixels does not support gl.DEPTH_COMPONENT like OpenGL's .glReadPixels does.
THREE.WebGLRenderTarget does support a .depthTexture via THREE.WebGLRenderer's WEBGL_depth_texture extension, although THREE.DepthTexture does not contain .image.data like THREE.DataTexture does.
I also see that THREE.WebGLShadowMap uses .renderBufferDirect with a THREE.MeshDepthMaterial.
Data types:
A non-rendered canvas can use .getContext('2d') with .getImageData(x,y,w,h).data to get the top-to-bottom pixels as a Uint8ClampedArray.
For a rendered canvas, render() uses getContext('webgl'), and a canvas's context type may only be requested once, so getImageData cannot be used.
Instead, render to a target and use .readRenderTargetPixels(...myArrToCopyInto...) to access (copy out) the bottom-to-top pixels in your Uint8Array.
Any canvas can use .toDataURL("image/png") to return a String in the pattern "data:image/png;base64,theBase64PixelData".
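For example, to snapshot any canvas regardless of context type:
var dataUrl = document.getElementById("myCanvas").toDataURL("image/png");
// "data:image/png;base64,..." - usable as an <img> src or for saving.
// For a WebGL canvas this generally requires the context to have been
// created with preserveDrawingBuffer: true.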
You can't directly get the content of the framebuffer's depth attachment using readPixels, whether it's a RenderBuffer or a (depth) texture.
You have to write the depth data into the color attachment.
You can render your scene using MeshDepthMaterial, like the shadow mapping technique. You end up with the depth RGBA-encoded in the color attachment, and you can get it using readPixels (still RGBA-encoded). It means you have to render your scene twice: once for the depth and once to display the scene on screen.
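A minimal, untested sketch of that two-pass idea (width/height are placeholders; the depthPacking option on MeshDepthMaterial and the render()-to-target signature shown match three.js revisions around r8x):
var target = new THREE.WebGLRenderTarget(width, height);
var depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking; // pack depth into RGBA bytes

// Pass 1: render depth into the off-screen target.
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, target);
scene.overrideMaterial = null;

// Pass 2: the normal on-screen render.
renderer.render(scene, camera);

// Copy the RGBA-encoded depth out; decode it with the inverse of the
// packDepthToRGBA() shader chunk to recover normalized depth.
var buffer = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, buffer);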
If the depth you want matches what you show on screen (same camera/point of view), you can use WEBGL_depth_texture to render depth and display in one single render loop. It can be faster if your scene contains lots of objects/materials.
Finally, if your hardware supports OES_texture_float, you should be able to draw depth data to a LUMINANCE/FLOAT texture instead of RGBA. This way you can directly get floating-point depth data and skip a costly decoding process in JS.
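Sketched, assuming OES_texture_float is available (an RGBA/FLOAT target keeps readRenderTargetPixels happy; writing linear depth is left to a custom material):
var floatTarget = new THREE.WebGLRenderTarget(width, height, {
    format: THREE.RGBAFormat,
    type: THREE.FloatType
});
// ...render the scene into floatTarget with a material that writes
// linear depth into the color output, then:
var floatBuffer = new Float32Array(width * height * 4);
renderer.readRenderTargetPixels(floatTarget, 0, 0, width, height, floatBuffer);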

Creating BezierPath from Texture in SpriteKit/Swift

So, basically, I have a sprite (.png) in SpriteKit, where most of the pixels are transparent (no detail, etc.).
I wish to create a UIBezierPath of only the detail inside the Sprite, so later I can use containsPoint.
After a bunch of googling, I found you can create a physics body from the texture of the sprite image, like this:
let texture = SKTexture(imageNamed: "myImage.png")
img.physicsBody = SKPhysicsBody(texture: texture, size: img.size)
This creates a physics body around only the detail inside the image, excluding any fully transparent pixels, if you know what I mean.
Is there a way to create a UIBezierPath doing the same? Or maybe create a CGPath, then use that to create a UIBezierPath?
The reason is, I'm hoping to test whether a CGPoint is inside this sprite, without using the physics delegate or physics bodies.
Thanks in advance.

RMagick: Rotate image with origin (matrix transform)

tl;dr: How can I use RMagick to rotate an image around a given point?
I created a website which allows users to manipulate two images online and composites them on the server side into a single image.
I use regular css transform: rotate() for rotation on the client side.
On the server side I'm rotating the image using RMagick's rotate! method but the results differ from the web version.
Presumably this is an origin issue (i.e. the point of the image around which the rotation takes place).
The web version rotates around the center of the image (transform-origin: 50% 50%). Unfortunately, RMagick doesn't by default.
I read through the RMagick docs and found affine_transform, which accepts a matrix and transforms the image. Is this the right method to use, and if so, how? I tried passing the CSS matrix to that function but it doesn't work.
Somewhere in the RMagick documentation I read that Magick::Image#rotate accepts 3 parameters (degree, originX, originY) but my version says that it only accepts 2 parameters (and actually requires the second parameter to be a string...).
My code:
require 'rmagick'
include Magick
@label = Label.last
image = @label.image
json = @label.processing
image.background_color = "none"
image.resize!(json["size"]["width"], json["size"]["height"])
# how can I set the rotation origin to the center of the image?
image.rotate!(json["rotation"].to_i)
overlay.composite!(image, json["position"]["x"], json["position"]["y"],
                   Magick::OverCompositeOp)
overlay.write("output5.png")
The output I'm currently getting is this. The blue square is actually image in the code. The heart is overlay.
My desired output looks like this: (ignore the background, border and controls)
If I don't use rotation at all, both images are identical. That's why I assume it's a rotation issue. Both images have the same width and height.
edit: Apparently only Magick::RVG::Image accepts the originX & originY parameters I mentioned above. I'm still not able to convert the current image into an RVG image. It might solve the issue if I could convert my Magick::Image into a Magick::RVG::Image.
Okay, I've found a solution to this problem. I have to use RVG, which is a module for creating vector graphics.
The #rotate(degree, originX, originY) method is defined in the RVG::Image class, so I had to wrap my Magick::Image object with:
require 'rvg/rvg'
# Wrap the Magick::Image so rotate can take an origin:
image = RVG::Image.new(magick_image, width, height, x, y)
image.rotate(degrees, origin_x, origin_y)
canvas = RVG.new(width, height)
canvas.use image
overlay.composite!(canvas.draw, ...)
Writing this on mobile, I will add a detailed answer asap.
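As an aside, plain RMagick should also be able to rotate about the image centre using distort with ScaleRotateTranslateDistortion, which defaults to the centre as origin when given only an angle. A sketch, not tested against this exact setup:
# Keep the corners transparent rather than filled with background colour.
image.virtual_pixel_method = Magick::TransparentVirtualPixelMethod
# With a single argument, SRT distortion rotates around the image centre.
rotated = image.distort(Magick::ScaleRotateTranslateDistortion, [json["rotation"].to_f])
overlay.composite!(rotated, json["position"]["x"], json["position"]["y"],
                   Magick::OverCompositeOp)
Note that distort keeps the original viewport unless bestfit is enabled, so the composite offsets may still need adjusting.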

Monotouch: Create image reflection?

I would like to add a reflection to an image, like this:
Is that possible in Monotouch?
Thanks!
Mojo
It is possible, considering that you have full access to CoreGraphics.
There are too many ways of skinning that cat though.
Say you have the top image in "image"; I would do something like:
Create a graphics context
Draw the image
Create a bitmap context for the inverted image, with alpha transparency
Render the image inverted
Render a gradient that has been configured to go from 0.5 opaque to 0.2 opaque
Render that on the bottom of the image
Get an image out of the second context
Draw the extracted image into the first context, inverted.
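A condensed, untested sketch of those steps in classic MonoTouch. It collapses the two contexts into a single image context, draws the flipped copy via CGContext.DrawImage (CoreGraphics' bottom-up coordinates invert it for free), and fades the reflection's alpha with a DestinationIn gradient:
using System.Drawing;
using MonoTouch.UIKit;
using MonoTouch.CoreGraphics;

static UIImage MakeReflectedImage (UIImage image)
{
    float w = image.Size.Width, h = image.Size.Height;
    UIGraphics.BeginImageContextWithOptions (new SizeF (w, h * 2), false, 0);
    var ctx = UIGraphics.GetCurrentContext ();

    // Draw the original image in the top half.
    image.Draw (new PointF (0, 0));

    // CGContext.DrawImage uses CoreGraphics' bottom-up coordinates, so
    // drawing the CGImage into the bottom half renders it upside down.
    ctx.DrawImage (new RectangleF (0, h, w, h), image.CGImage);

    // Fade the reflection: clip to the bottom half and multiply its
    // alpha by a 0.5 -> 0.2 gradient using the DestinationIn blend mode.
    ctx.SaveState ();
    ctx.ClipToRect (new RectangleF (0, h, w, h));
    ctx.SetBlendMode (CGBlendMode.DestinationIn);
    using (var space = CGColorSpace.CreateDeviceRGB ())
    using (var gradient = new CGGradient (space, new [] {
        new CGColor (0f, 0f, 0f, 0.5f),
        new CGColor (0f, 0f, 0f, 0.2f) }))
    {
        ctx.DrawLinearGradient (gradient, new PointF (0, h), new PointF (0, h * 2),
            CGGradientDrawingOptions.None);
    }
    ctx.RestoreState ();

    var result = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return result;
}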
