Creating a completeness meter (status display) in Xamarin

I am trying to design a control that displays the current status of a process, like the image below.
So we have a circular status display with colored sections for milestones or checkpoints. In the image, we are already through the first two stages, and the third stage is 70% done.
I know there is a jQuery control that is pretty similar, but I am not sure if there is a third-party control in Xamarin.Forms that I can use. If there is no third-party control, how should I proceed with the design?
Should I just create images for the different stages and display the appropriate one? Or should I create a custom control that takes two values, "milestone" and "percentage_complete", and then draws something like a pie chart on the fly?

Using NGraphics with NControl you can create a "vector" version of your "completeness meter" without writing platform renderers or adding libraries like Skia to your project.
Note: SkiaSharp and other native 2D/3D libraries are great, but they add a lot of overhead to an app, and if you do not need all their features the bloat (app size, memory usage, initialization time, etc.) is not worth it (IMHO).
re: https://github.com/praeclarum/NGraphics
I stripped down a MultiSegmentProgressControl that I wrote to show you the basics of the arc drawing. The full version allows you to add and animate multiple segments, display percentages, break out segments on touch, etc.
Using NControl you can create composite controls with touch elements, so it is up to you how far you need to take it.
re: https://github.com/chrfalch/NControl
public class MultiSegmentProgressControl2 : NControlView
{
    double ringWidth = 50;
    double ringInnerWidth = 100;

    SolidBrush redBrush = new SolidBrush(Colors.Red);
    RadialGradientBrush redSegmentBrush = new RadialGradientBrush(
        new Point(0.5, 0.5),
        new Size(.75, .75),
        Colors.LightGray,
        Colors.Red);

    SolidBrush blueBrush = new SolidBrush(Colors.Blue);
    RadialGradientBrush blueSegmentBrush = new RadialGradientBrush(
        new Point(0.5, 0.5),
        new Size(.75, .75),
        Colors.LightGray,
        Colors.Green);

    Tuple<double, double> _redSegment;
    public Tuple<double, double> RedSegment { get { return _redSegment; } set { _redSegment = value; Invalidate(); } }

    Tuple<double, double> _greenSegment;
    public Tuple<double, double> GreenSegment { get { return _greenSegment; } set { _greenSegment = value; Invalidate(); } }

    public override void Draw(ICanvas canvas, Rect rect)
    {
        // Outer gray ring, then inner light-gray ring, then white center pad
        canvas.FillEllipse(rect.TopLeft, rect.Size, Colors.Gray);
        var n = rect;
        n.X += ringWidth;
        n.Y = n.X;
        n.Width -= ringWidth * 2;
        n.Height = n.Width;
        var i = n;
        canvas.FillEllipse(n.TopLeft, n.Size, Colors.LightGray);
        n.X += ringInnerWidth;
        n.Y = n.X;
        n.Width -= ringInnerWidth * 2;
        n.Height = n.Width;
        canvas.FillEllipse(n.TopLeft, n.Size, Colors.White);

        // Each segment is drawn twice: once on the outer ring, once on the inner ring
        var r = rect.Width / 2;
        DrawSegment(canvas, rect, ringWidth, redBrush, r, _redSegment.Item1, _redSegment.Item2);
        DrawSegment(canvas, i, ringInnerWidth, redSegmentBrush, r - ringWidth, _redSegment.Item1, _redSegment.Item2);
        DrawSegment(canvas, rect, ringWidth, blueBrush, r, _greenSegment.Item1, _greenSegment.Item2);
        DrawSegment(canvas, i, ringInnerWidth, blueSegmentBrush, r - ringWidth, _greenSegment.Item1, _greenSegment.Item2);
    }

    // Draws one annular segment from angle s to angle f (in degrees) on a ring of outer radius r
    void DrawSegment(ICanvas canvas, Rect rect, double width, Brush brush, double r, double s, double f)
    {
        canvas.DrawPath(new PathOp[]{
            new MoveTo(SegmentEdgePoint(rect.Center, r, s)),
            new ArcTo(new Size(rect.Height / 2, rect.Width / 2), false, true, SegmentEdgePoint(rect.Center, r, f)),
            new LineTo(SegmentEdgePoint(rect.Center, r - width, f)),
            new ArcTo(new Size(r, r), false, false, SegmentEdgePoint(rect.Center, r - width, s)),
            new LineTo(SegmentEdgePoint(rect.Center, r, s)),
            new ClosePath()
        }, null, brush);
    }

    // Point on a circle of radius r around center c, at d degrees
    Point SegmentEdgePoint(Point c, double r, double d)
    {
        return new Point(
            c.X + r * Math.Cos(d * Math.PI / 180),
            c.Y + r * Math.Sin(d * Math.PI / 180)
        );
    }
}
When using NGraphics, I highly recommend using the NGraphics.Editor or a Xamarin Workbook to interactively design your control:

If you can't find an already completed solution (highly recommended if you can find one!), then creating a control that makes incremental drawing calls to a graphics library might work:
Draw a segment for each of the strongly coloured outer ring sections,
Draw a segment with reduced radius for the inner, dimly coloured sections,
Draw a white circle for the centre white pad.
Good luck!

Related

Processing | Program is lagging

I'm new to Processing and I need to make a program that captures the main monitor, shows the average color on the second screen, and draws a spiral using another color (the perceptual dominant color) obtained from a function.
The problem is that the program is very slow (it lags at about 1 FPS). I think it's because it has too many things to do every time I take a screenshot, but I have no idea how to make it faster.
There could be many other problems too, but that's the main one.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;

PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
}

void draw() {
  screenshot();
  avg_c = extractColorFromImage(screenshot);
  per_c = extractAverageColorFromImage(screenshot);
  background(avg_c); // Average color
  spiral();
}

void screenshot() {
  try {
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture
      (new Rectangle(0, 0, displayWidth, displayHeight)));
  }
  catch (AWTException e) { }
  frame.setLocation(displayWidth/2, 0);
}

void spiral() {
  fill(per_c);
  for (int i = blockSize; i < width; i += blockSize*2)
  {
    ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
    a += 0.001;
  }
}

color extractColorFromImage(PImage screenshot) { // Get average color
  screenshot.loadPixels();
  int r = 0, g = 0, b = 0;
  for (int i = 0; i < screenshot.pixels.length; i++) {
    color c = screenshot.pixels[i];
    r += c>>16&0xFF;
    g += c>>8&0xFF;
    b += c&0xFF;
  }
  r /= screenshot.pixels.length;
  g /= screenshot.pixels.length;
  b /= screenshot.pixels.length;
  return color(r, g, b);
}

color extractAverageColorFromImage(PImage screenshot) { // Get Lab average color (perceptual)
  float[] average = new float[3];
  CIELab lab = new CIELab();
  int numPixels = screenshot.pixels.length;
  for (int i = 0; i < numPixels; i++) {
    color rgb = screenshot.pixels[i];
    float[] labValues = lab.fromRGB(new float[]{red(rgb), green(rgb), blue(rgb)});
    average[0] += labValues[0];
    average[1] += labValues[1];
    average[2] += labValues[2];
  }
  average[0] /= numPixels;
  average[1] /= numPixels;
  average[2] /= numPixels;
  float[] rgb = lab.toRGB(average);
  return color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
}
public class CIELab extends ColorSpace {

  @Override
  public float[] fromCIEXYZ(float[] colorvalue) {
    double l = f(colorvalue[1]);
    double L = 116.0 * l - 16.0;
    double a = 500.0 * (f(colorvalue[0]) - l);
    double b = 200.0 * (l - f(colorvalue[2]));
    return new float[] {(float) L, (float) a, (float) b};
  }

  @Override
  public float[] fromRGB(float[] rgbvalue) {
    float[] xyz = CIEXYZ.fromRGB(rgbvalue);
    return fromCIEXYZ(xyz);
  }

  @Override
  public float getMaxValue(int component) {
    return 128f;
  }

  @Override
  public float getMinValue(int component) {
    return (component == 0) ? 0f : -128f;
  }

  @Override
  public String getName(int idx) {
    return String.valueOf("Lab".charAt(idx));
  }

  @Override
  public float[] toCIEXYZ(float[] colorvalue) {
    double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
    double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
    double Y = fInv(i);
    double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
    return new float[] {(float) X, (float) Y, (float) Z};
  }

  @Override
  public float[] toRGB(float[] colorvalue) {
    float[] xyz = toCIEXYZ(colorvalue);
    return CIEXYZ.toRGB(xyz);
  }

  CIELab() {
    super(ColorSpace.TYPE_Lab, 3);
  }

  private double f(double x) {
    if (x > 216.0 / 24389.0) {
      return Math.cbrt(x);
    } else {
      return (841.0 / 108.0) * x + N;
    }
  }

  private double fInv(double x) {
    if (x > 6.0 / 29.0) {
      return x*x*x;
    } else {
      return (108.0 / 841.0) * (x - N);
    }
  }

  private final ColorSpace CIEXYZ =
    ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
  private final double N = 4.0 / 29.0;
}
There's a lot that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every Nth pixel (perhaps every 4th or 8th) of the buffered image. During this iteration, calculate the LAB value for each pixel (as you have each pixel's channels directly available), and meanwhile increment the running total of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
On top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
(We're not so concerned about the backwards conversion, since that's only done for one value per frame.)
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses cbrt(), which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); 3 non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in java.lang.Math replacement: simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's 1e-15 precision for its less accurate but even faster dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
You may even find that for such a project, converting to XYZ alone gives you a sufficient color space to overcome the well-known weaknesses of RGB (and therefore saves you the XYZ→LAB conversion entirely).
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel, recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead of checking every pixel against its previous value, versus how much calculation positive checks will save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It struggles to go past 30 FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;

java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;

float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();
final int pixelStride = 8; // look at every 8th pixel

void setup() {
  size(800, 800, FX2D);
  noStroke();
  frame.removeNotify();
  taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
  Compute thread1 = new Compute(0, 0, width, height/2);
  Compute thread2 = new Compute(0, height/2, width, height/2);
  taskList.add(thread1);
  taskList.add(thread2);
}

void draw() {
  totalR = 0; // re-init
  totalG = 0; // re-init
  totalB = 0; // re-init
  averageLAB = new float[3]; // re-init
  final int numPixels = (width*height)/pixelStride;

  try {
    threadPool.invokeAll(taskList); // run threads now and block until completion of all
  }
  catch (Exception e) {
    e.printStackTrace();
  }

  // calculate average LAB
  averageLAB[0] /= numPixels;
  averageLAB[1] /= numPixels;
  averageLAB[2] /= numPixels;
  final float[] rgb = lab.toRGB(averageLAB);
  per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);

  // calculate average RGB
  totalR /= numPixels;
  totalG /= numPixels;
  totalB /= numPixels;
  avg_c = color(totalR, totalG, totalB);

  background(avg_c); // Average color
  spiral();
  fill(255, 0, 0);
  text(frameRate, 10, 20);
}

class Compute implements java.util.concurrent.Callable<Boolean> {

  private final Rectangle screenRegion;
  private Robot robot_Screenshot;

  private final int[] previousRGB;
  private float[][] previousLAB;

  Compute(int x, int y, int w, int h) {
    screenRegion = new Rectangle(x, y, w, h);
    previousRGB = new int[w*h];
    previousLAB = new float[w*h][3];
    try {
      robot_Screenshot = new Robot();
    }
    catch (AWTException e1) {
      e1.printStackTrace();
    }
  }

  @Override
  public Boolean call() {
    BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
    int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
    rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array

    for (int pixel = 0; pixel < ssPixels.length; pixel += pixelStride) {
      // get individual colour channels
      final int pixelColor = ssPixels[pixel];
      final int R = pixelColor >> 16 & 0xFF;
      final int G = pixelColor >> 8 & 0xFF;
      final int B = pixelColor & 0xFF;

      if (pixelColor != previousRGB[pixel]) { // if pixel has changed, recalculate LAB value
        float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
        previousLAB[pixel] = labValues;
      }

      averageLAB[0] += previousLAB[pixel][0];
      averageLAB[1] += previousLAB[pixel][1];
      averageLAB[2] += previousLAB[pixel][2];

      totalR += R;
      totalG += G;
      totalB += B;

      previousRGB[pixel] = pixelColor; // cache last result
    }
    return true;
  }
}
800x800 px; pixelStride = 4; fairly static screen background:

Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading everything looking for things to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. There is a quadratic relationship between the size of a screen and the number of pixels on that screen, so the bigger the screen, the worse the situation. And this method processes every pixel of the screenshot all the time, several times per screenshot.
This is a lot of work for an average color. Now, if there was a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh! There is! Let's resize the screenshot. After all, we don't need to go into such detail as individual pixels for an average. In the screenshot method, add this line:
void screenshot() {
  try {
    Robot robot_Screenshot = new Robot();
    screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
    // ADD THE NEXT LINE
    screenshot.resize(width/4, height/4);
  }
  catch (AWTException e) {
  }
  frame.setLocation(displayWidth/2, 0);
}
I divided each dimension by 4 (so roughly 16x fewer pixels to process), but I encourage you to tweak this number until you have the fastest satisfying result you can get. This is just a proof of concept:
As you can see, resizing the screenshot to be 4x smaller per side gives me 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result. About that part, though, you'll have to use your own judgement, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot's (+1), but hopefully I can provide a few tips:
Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of visiting every single one; if you handle the pixel indices correctly, you can get a similar effect to resize() without calling it, though that won't save you a lot of CPU time.
Don't create a new Robot instance multiple times a second. Create it once in setup() and re-use it. (This is more of a good habit to get into.)
Use a CPU profiler, such as the one in VisualVM, to see exactly what is slow and aim to optimise the slowest parts first.
point 1 example:
for (int i = 0; i < numPixels; i+= 100)
point 2 example:
Robot robot_Screenshot;
...
void setup() {
  fullScreen(2); // 1920x1080
  noStroke();
  frame.removeNotify();
  try {
    robot_Screenshot = new Robot();
  } catch (AWTException e) {
    println("error setting up screenshot Robot instance");
    e.printStackTrace();
  }
}
...
void screenshot() {
  screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
  frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice the slowest bits are actually AWT's fromRGB and Math.cbrt().
I'd suggest finding an alternative RGB → XYZ → L*a*b* conversion method that is simpler (mainly functions, fewer classes, without the AWT or other dependencies) and hopefully faster.

Processing: Efficiently create uniform grid

I'm trying to create a grid of an image (in the way one would tile a background). Here's what I've been using:
PImage bgtile;
PGraphics bg;
int tilesize = 50;

void setup() {
  int t = millis();
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");

  int bgw = ceil( ((float) width) / tilesize) + 1;
  int bgh = ceil( ((float) height) / tilesize) + 1;
  bg = createGraphics(bgw*tilesize, bgh*tilesize);
  bg.beginDraw();
  for (int i = 0; i < bgw; i++) {
    for (int j = 0; j < bgh; j++) {
      bg.image(bgtile, i*tilesize, j*tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();
  print(millis() - t);
}
The timing code says that this takes about a quarter of a second, but by my count there's a full second after the window opens before anything shows up on screen (which should happen as soon as draw first runs). Is there a faster way to get the same effect? (I want to avoid rendering bgtile hundreds of times in the draw loop, for obvious reasons.)
One way could be to make use of the GPU and let OpenGL repeat a texture for you.
Processing makes it fairly easy to repeat a texture via textureWrap(REPEAT).
Instead of drawing an image, you'd make your own quad shape, and instead of calling vertex(x, y) you'd call vertex(x, y, u, v), passing texture coordinates (more low-level info in the OpenGL docs). The simple idea is that x, y control the geometry on screen while u, v control how the texture is applied to that geometry.
Another thing you can control is textureMode(), which determines how you specify the texture coordinates (U, V):
IMAGE mode is the default: you use pixel coordinates (based on the dimensions of the texture).
NORMAL mode uses values between 0.0 and 1.0 (also known as normalised values), where 1.0 means the maximum extent of the texture (e.g. image width for U or image height for V), so you don't need to know the texture image dimensions.
Here's a basic example based on the textureMode() example above:
PImage img;

void setup() {
  fullScreen(P2D);
  noStroke();
  img = loadImage("https://processing.org/examples/moonwalk.jpg");
  // texture mode can be IMAGE (pixel dimensions) or NORMAL (0.0 to 1.0)
  // normal means 1.0 is full width (for U) or height (for V) without having to know the image resolution
  textureMode(NORMAL);
  // this is what will handle tiling for you
  textureWrap(REPEAT);
}

void draw() {
  // drag mouse on X axis to change tiling
  int tileRepeats = (int)map(constrain(mouseX, 0, width), 0, width, 1, 100);
  // draw a textured quad
  beginShape(QUAD);
  // set the texture
  texture(img);
  //     x    , y     , U          , V
  vertex(0    , 0     , 0          , 0);
  vertex(width, 0     , tileRepeats, 0);
  vertex(width, height, tileRepeats, tileRepeats);
  vertex(0    , height, 0          , tileRepeats);
  endShape();
  text((int)frameRate+"fps", 15, 15);
}
Drag the mouse on the X axis to control the number of repetitions.
In this simple example, both vertex coordinates and texture coordinates go clockwise (top left, top right, bottom right, bottom left).
There are probably other ways to achieve the same result: using a PShader comes to mind.
Your approach of caching the tiles in setup is OK.
Even flattening your nested loop into a single loop would at best shave a few milliseconds off, nothing substantial.
If you tried to cache my snippet above it would make a minimal difference.
In this particular case, because of the back and forth between Java and OpenGL (via JOGL), as far as I can tell using VisualVM there's not a lot of room for improvement, since simply swapping buffers (e.g. bg.image()) takes so long:
An easy way to do this would be to use Processing's built-in get(), which returns a PImage of the region you pass it; for example, PImage pic = get(0, 0, width, height); will capture a "screenshot" of your entire window. So you can create the image like you already are, then take a screenshot and display that screenshot.
PImage bgtile;
PGraphics bg;
PImage screenGrab;
int tilesize = 50;

void setup() {
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");

  int bgw = ceil(((float) width) / tilesize) + 1;
  int bgh = ceil(((float) height) / tilesize) + 1;
  bg = createGraphics(bgw * tilesize, bgh * tilesize);
  bg.beginDraw();
  for (int i = 0; i < bgw; i++) {
    for (int j = 0; j < bgh; j++) {
      bg.image(bgtile, i * tilesize, j * tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();

  image(bg, 0, 0); // draw the tiled buffer once so get() has something to capture
  screenGrab = get(0, 0, width, height);
}

void draw() {
  image(screenGrab, 0, 0);
}
This will still take a moment to generate the image, but once it does, there is no need to run the for loops again unless you change the tilesize.
@George Profenza's answer looks more efficient than my solution, but mine may take a little less modification to the code you already have.

Coloring 3d model Vuforia Unity

I want to implement a feature in which the color of a particular area of the image is picked up and applied to a 3D model. I am using Vuforia and Unity3D and have successfully implemented target detection. As the next step, I want to pick a color from the image and put that color on the 3D model.
Many people have already implemented this, but I am not able to find a complete tutorial on it.
I have tried to use Region Capture as well, but with no success.
I would take the area of the screen you are after, read it into a pixel array, then average that array.
public Color GetColorFromScreen(int x, int y, int width, int height) {
    // Texture sized to the region we want to read
    Texture2D tex = new Texture2D(width, height);
    // ReadPixels reads from the frame buffer, so call this once the frame
    // has rendered (e.g. from a coroutine after WaitForEndOfFrame)
    tex.ReadPixels(new Rect(x, y, width, height), 0, 0);
    tex.Apply();

    // Pixels were written starting at (0, 0), so read the whole texture back
    Color[] pix = tex.GetPixels();
    float r = 0, g = 0, b = 0, a = 0;
    foreach (Color col in pix) {
        r += col.r;
        g += col.g;
        b += col.b;
        a += col.a;
    }
    r /= pix.Length;
    g /= pix.Length;
    b /= pix.Length;
    a /= pix.Length;
    return new Color(r, g, b, a);
}
Then grab the material of your model and apply that color:
GetComponent<Renderer>().material.color = GetColorFromScreen(x,y,w,h);
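One caveat: Unity's ReadPixels() only returns valid frame-buffer data once the frame has finished rendering, so in practice you would drive this from a coroutine. Here is a minimal self-contained sketch that folds the averaging logic above into a hypothetical component (the class name and the 32x32 sample region are placeholders, not part of any Vuforia API):
using System.Collections;
using UnityEngine;

// Hypothetical helper component: sample a screen region once the frame
// has rendered, then tint this object's material with the average color.
public class ScreenColorPicker : MonoBehaviour
{
    // Screen region to sample (example values; adjust to your target area)
    public int x = 0, y = 0, regionWidth = 32, regionHeight = 32;

    public void PickColor()
    {
        StartCoroutine(ApplyScreenColor());
    }

    IEnumerator ApplyScreenColor()
    {
        // ReadPixels needs a valid frame buffer, so wait for end of frame
        yield return new WaitForEndOfFrame();

        Texture2D tex = new Texture2D(regionWidth, regionHeight);
        tex.ReadPixels(new Rect(x, y, regionWidth, regionHeight), 0, 0);
        tex.Apply();

        // Average all sampled pixels
        Color avg = new Color(0, 0, 0, 0);
        foreach (Color c in tex.GetPixels())
            avg += c;
        avg /= tex.width * tex.height;

        GetComponent<Renderer>().material.color = avg;
        Destroy(tex); // avoid leaking the temporary texture
    }
}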

Direct2D image viewer: how to convert screen coordinates to image coordinates?

I'm trying to figure out how to convert the mouse position (screen coordinates) to the corresponding point on the underlying transformed image drawn on a Direct2D surface.
The code here should be considered pseudo-code: I'm using a modified C++/CLI wrapper around Direct2D for C#, so you won't be able to compile this in anything but my own project.
Render()
{
    // The transform matrix combines a rotation, followed by a scaling, then a translation
    renderTarget.Transform = _rotate * _scale * _translate;
    RectF imageBounds = new RectF(0, 0, _imageSize.Width, _imageSize.Height);
    renderTarget.DrawBitmap(this._image, imageBounds, 1, BitmapInterpolationMode.Linear);
}

Zoom(float zoomfactor, PointF mousePos)
{
    // mousePos is in screen coordinates. I need to convert it to image coordinates.
    Matrix3x2 t = _translate.Invert();
    Matrix3x2 s = _scale.Invert();
    Matrix3x2 r = _rotate.Invert();
    PointF center = (t * s * r).TransformPoint(mousePos);
    _scale = Matrix3x2.Scale(zoomfactor, zoomfactor, center);
}
This is incorrect: the scale center starts moving around wildly when the zoom factor increases or decreases smoothly, and the resulting zoom is not smooth and flickers a lot, even though the mouse pointer is immobile at the center of the client surface. I tried all the combinations I could think of but could not figure it out.
If I set the scale center point to (imageWidth/2, imageHeight/2), the resulting zoom is smooth but is always centered on the image center, so I'm pretty sure the flicker isn't due to some other buggy part of the program.
Thanks.
I finally got it right.
This gives me perfectly smooth (incremental? relative?) zooming centered on the client center.
(I abandoned the mouse-position idea, since I wanted to use mouse movement input to drive the zoom.)
protected float zoomf
{
    get
    {
        // extract scale factor from scale matrix
        return (float)Math.Sqrt((double)((_scale.M11 * _scale.M11)
            + (_scale.M21 * _scale.M21)));
    }
}

public void Zoom(float factor)
{
    factor = Math.Min(zoomf, 1) * 0.006f * factor;
    factor += 1;
    Matrix3x2 t = _translation;
    t.Invert();
    PointF center = t.TransformPoint(_clientCenter);
    Matrix3x2 m = Matrix3x2.Scale(new SizeF(factor, factor), center);
    _scale = _scale * m;
    Invalidate();
}
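For anyone who still wants the zoom centred on the mouse position: an approach that avoids recomputing a scale centre on every event (the likely source of the flicker above, since each call rebuilt _scale around a slightly different centre) is to scale the whole combined matrix about the cursor's screen point. A sketch using System.Numerics.Matrix3x2 (assuming the matrix maps image coordinates to screen coordinates in row-vector convention, as Direct2D does; the names are illustrative, not from the question's wrapper):
using System.Numerics;

// Zoom while keeping the image point under the cursor fixed.
// 'transform' is the full image->screen matrix used when rendering.
static Matrix3x2 ZoomAboutScreenPoint(Matrix3x2 transform, Vector2 mouseScreenPos, float zoomFactor)
{
    // Scaling about the cursor's screen point leaves that point fixed,
    // so the image pixel under the mouse stays under the mouse.
    return transform * Matrix3x2.CreateScale(zoomFactor, mouseScreenPos);
}

// Screen -> image conversion (e.g. for hit testing): invert the
// combined matrix once rather than inverting each component.
static Vector2 ScreenToImage(Matrix3x2 transform, Vector2 screenPoint)
{
    Matrix3x2 inverse;
    Matrix3x2.Invert(transform, out inverse);
    return Vector2.Transform(screenPoint, inverse);
}
Each wheel tick then just multiplies the accumulated transform by one more small scale about the cursor (e.g. a factor of 1.1f in, 1/1.1f out), and the combined matrix stays invertible for converting coordinates back.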
Step 1: Put android:scaleType="matrix" in the ImageView XML file.
Step 2: Convert screen touch points to matrix values.
Step 3: Divide each matrix value by the screen density parameter to get the same coordinate value on all screens.
XML
<ImageView
    android:id="@+id/myImage"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:scaleType="matrix"
    android:src="@drawable/ga"/>
JAVA
@Override
public boolean onTouchEvent(MotionEvent event) {
    float[] point = new float[]{event.getX(), event.getY()};
    Matrix inverse = new Matrix();
    getImageMatrix().invert(inverse);
    inverse.mapPoints(point);
    float density = getResources().getDisplayMetrics().density;
    int[] imagePointArray = new int[2];
    imagePointArray[0] = (int) (point[0] / density);
    imagePointArray[1] = (int) (point[1] / density);
    Rect rect = new Rect(imagePointArray[0] - 20, imagePointArray[1] - 20,
                         imagePointArray[0] + 20, imagePointArray[1] + 20); // 20 is the offset value around the touch point
    boolean b = rect.contains(267, 40); // 267,40 are the predefined image coordinates
    Log.e("Touch inside ", b + "");
    return true;
}

Zoom to fit algorithm

I'm trying to build a "zoom to fit" algorithm in Lua (Codea). Imagine a shape anywhere on the canvas. I would like to automatically zoom in on the center of this shape so that it occupies most of the canvas and is centred on it. Finally, I would like to be able to zoom back out to the initial situation, so matrices should do the job. Is there a simple way to do this? Any code, even not in Lua, is welcome.
In C#,
double aspectRatio = shape.Width / shape.Height;
if (aspectRatio > 1)
{
    // Width defines the layout
    double origShapeWidth = shape.Width;
    shape.Width = panel.Width;
    shape.Height = panel.Width * shape.Height / origShapeWidth;

    // Center the shape
    double margin = (panel.Height - shape.Height) / 2;
    shape.Margin = new Thickness(0, margin, 0, margin);
}
else
{
    // Height defines the layout
    double origShapeHeight = shape.Height;
    shape.Height = panel.Height;
    shape.Width = panel.Height * shape.Width / origShapeHeight;

    // Center the shape
    double margin = (panel.Width - shape.Width) / 2;
    shape.Margin = new Thickness(margin, 0, margin, 0);
}
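Since the question asks for a reversible, matrix-based version, here is a sketch of the same fit logic expressed as a single transform (C# with System.Numerics; the shapeCenter/shapeSize/canvasSize inputs and the padding factor are assumptions, not from the question). Because the result is one matrix, inverting it zooms you straight back out to the initial view:
using System;
using System.Numerics;

static class ZoomToFitHelper
{
    // Build a transform that scales the shape to (mostly) fill the canvas
    // and centres it; padding < 1 leaves a small border around the shape.
    public static Matrix3x2 ZoomToFit(Vector2 shapeCenter, Vector2 shapeSize,
                                      Vector2 canvasSize, float padding = 0.9f)
    {
        // Uniform scale, limited by whichever axis fits first
        float scale = padding * Math.Min(canvasSize.X / shapeSize.X,
                                         canvasSize.Y / shapeSize.Y);

        // Move shape centre to the origin, scale, then move to canvas centre
        return Matrix3x2.CreateTranslation(-shapeCenter)
             * Matrix3x2.CreateScale(scale)
             * Matrix3x2.CreateTranslation(canvasSize / 2f);
    }
}

// Usage: apply 'fit' to zoom in; invert it to restore the original view.
// Matrix3x2 fit = ZoomToFitHelper.ZoomToFit(center, size, canvas);
// Matrix3x2 back; Matrix3x2.Invert(fit, out back);
The same three-step composition (translate to origin, scale, translate to the canvas centre) carries over directly to Codea's matrix stack, which is what makes the zoom-out trivially recoverable.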
