Positioned Widgets on SingleChildScrollView - Differences between smartphones - Flutter - scroll

I'm developing a small game with a scrollable map and levels.
I have a scrollable map in a SingleChildScrollView.
I want to put my Positioned Widgets (my levels) on my map.
I used this formula for the map ratio: width_map / height_map.
I used this formula for the left distance: MediaQuery_width * resolution_map * custom_width_position.
I used this formula for the top distance: MediaQuery_height * resolution_map * custom_height_position.
Positioned(
left: width * sizeMap * level['width'],
top: height * sizeMap * level['height'],
child: ...
)
This formula gives different results on different smartphones => the levels do not end up in the same place on the map (they are close, but the positions don't correspond exactly).
Any ideas?
Thanks in advance.
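For what it's worth, offsets stay consistent across devices if they are computed from the size the map is actually rendered at rather than from MediaQuery, since the screen size and the map's drawn size relate differently on each phone. A minimal, language-agnostic sketch of that arithmetic (written in TypeScript just to show the numbers; mapRenderedWidth/mapRenderedHeight and the fractional level coordinates are assumptions, not names from the question):

```typescript
// Hypothetical level position expressed as fractions of the map itself (0..1),
// so it is independent of any particular screen size.
interface LevelFraction {
  fx: number; // horizontal position as a fraction of the map width
  fy: number; // vertical position as a fraction of the map height
}

// Convert a fractional map position into left/top offsets for a Positioned-style
// widget, using the size the map is actually rendered at on this device.
function levelOffset(
  level: LevelFraction,
  mapRenderedWidth: number,
  mapRenderedHeight: number
): { left: number; top: number } {
  return {
    left: level.fx * mapRenderedWidth,
    top: level.fy * mapRenderedHeight,
  };
}

// Example: the same fractional position lands on the same map feature
// regardless of how large the map is drawn on a given phone.
console.log(levelOffset({ fx: 0.25, fy: 0.6 }, 1600, 900)); // { left: 400, top: 540 }
console.log(levelOffset({ fx: 0.25, fy: 0.6 }, 1200, 675)); // { left: 300, top: 405 }
```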

Related

Mix two non-opaque colors with "hue" blend mode

I want to implement color blending as described in the W3C compositing and blending spec. (I'm doing this in JavaScript but the language shouldn't really matter for solving my problem.)
In retrospect: During the implementation of the answer to this question I realized that this would probably make for a pretty nice standalone package. In case you're interested you can grab it from npm.
It worked out pretty well so far but I wanted to take these algorithms a step further and add support for alpha channels. Thanks to the SVG compositing spec providing all the needed formulas that wasn't too hard.
But now I'm stuck with implementing the blend modes that the W3C spec describes as non-separable which are (as known from Photoshop): hue, saturation, color and luminosity.
Sadly, algorithms for those aren't available in the SVG spec and I have no idea how to work with them. I guess there are modified versions of the W3C formulas for working with alpha channels that I'm missing.
To make my problem a little more visual I'll show what Photoshop gives me for hue blending two colors:
This is what I'm also able to reproduce with the non-alpha algorithm from the mentioned W3C spec.
What I can't reproduce is the result that Photoshop gives me when I put a lower alpha on both the source and the backdrop color:
Does anyone know how to achieve that result programmatically?
Update 1: Changed illustrations (adding HSVA and RGBA codes) to clarify the used colors.
Update 2: To check possible solutions I'll attach two other Photoshop-generated blending examples:
Update 3: So it turned out that in addition to not having a clue about color blending I also messed up my Photoshop settings, making the task to solve my question even harder. Fixed the example images for possible future passerbies.
The Hue alpha you have in your second image does not come from the alpha colour compositing formula; it rather reflects the Porter-Duff Source Over alpha composition as defined in 9.1.4. Source Over, which uses the formula co = Cs × αs + Cb × αb × (1 − αs), with the result divided by the union alpha αr = αs + αb × (1 − αs).
If you want to achieve that kind of blending, which is not proper Hue blending, you can use the following formula in JavaScript:
PDso = { // Porter-Duff Source Over
r: ((S.r * S.a) + (B.r * B.a) * (1 - S.a)) / aR,
g: ((S.g * S.a) + (B.g * B.a) * (1 - S.a)) / aR,
b: ((S.b * S.a) + (B.b * B.a) * (1 - S.a)) / aR,
};
// where
// S : the source rgba
// B : the backdrop rgba
// aR : the union alpha (as + ab * (1 - as))
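As a quick sanity check, here is a self-contained sketch of that same Porter-Duff source-over compositing, using the two colours from the CSS snippet further down (channels are assumed to be 0-255 and alpha 0-1):

```typescript
interface RGBA { r: number; g: number; b: number; a: number }

// Porter-Duff "source over": the source is composited over the backdrop,
// then the premultiplied result is divided by the union alpha.
function sourceOver(S: RGBA, B: RGBA): RGBA {
  const aR = S.a + B.a * (1 - S.a); // union alpha
  const over = (cs: number, cb: number) =>
    (cs * S.a + cb * B.a * (1 - S.a)) / aR;
  return { r: over(S.r, B.r), g: over(S.g, B.g), b: over(S.b, B.b), a: aR };
}

// Example with the colours used later in the CSS demo (channels 0-255, alpha 0-1):
const source   = { r: 255, g: 213, b: 0,   a: 0.6 };
const backdrop = { r: 141, g: 214, b: 214, a: 0.6 };
console.log(sourceOver(source, backdrop));
```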
Hue Blending Mode with Alpha Channel
Here is a screenshot, created in Photoshop, of the exact hue blend of source over backdrop using the alpha colour compositing formula:
The middle square with the green highlighted letters is the correct blend representation. Here is the CSS Hue mix blend with the source color inside the backdrop color, using the new CSS mix-blend-mode (run the code snippet):
.blends div {
width:140px;
height:140px;
}
.source {
mix-blend-mode: hue;
}
.backdrop.alpha {
background-color: rgba(141, 214, 214, .6);
isolation: isolate;
}
.source.alpha {
background-color: rgba(255, 213, 0, .6);
}
<div id="main">
<div class="blends alpha">
<div class="backdrop alpha">
<div class="source alpha"></div>
</div>
</div>
</div>
If you use a color picker, you'll get almost the same values (211, 214, 140 <> 210, 214, 140). That can be due to slightly different algorithms or different rounding methods, but it doesn't really matter. The fact is that this is the correct result when blending alpha colors with the hue blend mode.
So, now we need the formula that gives the proper colour values for the alpha colour composition applied to our hue blend mode. I searched a little and found everything inside the Adobe Document management - Portable Document Format - Part 1: PDF 1.7. The colour compositing formula can be found on page 328, right after the Blend Modes:
11.3.6 Interpretation of Alpha
The colour compositing formula: C = (1 − αs / αr) × Cb + (αs / αr) × [(1 − αb) × Cs + αb × B(Cb, Cs)]
This is the formula with which I managed to get the proper, closest-to-Photoshop match for the Hue Blending Mode with an alpha channel. I wrote it like this in JavaScript:
function Union(ab, as) {
return as + ab * (1 - as);
}
function colourCompositingFormula(as, ab, ar, Cs, Cb, Bbs) {
return (1 - (as / ar)) * Cb + (as / ar) * Math.floor((1 - ab) * Cs + ab * Bbs);
}
var aR = Union(B.a, S.a); // αr = Union(αb, αs) // Adobe PDF Format Part 1 - page 331
var Ca = {
// Adobe PDF Format Part 1 - page 328
r: colourCompositingFormula(S.a, B.a, aR, S.r, B.r, C.r),
g: colourCompositingFormula(S.a, B.a, aR, S.g, B.g, C.g),
b: colourCompositingFormula(S.a, B.a, aR, S.b, B.b, C.b)
};
// where
// C : the hue blend mode result rgb
// S : the source rgba
// B : the backdrop rgba
// aR : the union alpha (as + ab * (1 - as))
// Ca : the final result
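The one piece the snippet above assumes is C, the hue blend result itself. For completeness, here is a sketch of the W3C non-separable hue blend (the Lum/Sat/SetLum/SetSat formulas from the compositing-and-blending spec), working on channels normalised to 0-1; its output can be fed into colourCompositingFormula above (which was written for 0-255 channel values, so scale back up first). Everything here is my own illustration, not code from the answer:

```typescript
type RGB = [number, number, number]; // channels in 0..1

const lum = (c: RGB): number => 0.3 * c[0] + 0.59 * c[1] + 0.11 * c[2];
const sat = (c: RGB): number => Math.max(...c) - Math.min(...c);

// Clamp a colour back into gamut while preserving its luminosity.
function clipColor(c: RGB): RGB {
  const l = lum(c);
  const n = Math.min(...c);
  const x = Math.max(...c);
  let out = c;
  if (n < 0) out = out.map(ch => l + ((ch - l) * l) / (l - n)) as RGB;
  if (x > 1) out = out.map(ch => l + ((ch - l) * (1 - l)) / (x - l)) as RGB;
  return out;
}

// Force the luminosity of c to l.
function setLum(c: RGB, l: number): RGB {
  const d = l - lum(c);
  return clipColor(c.map(ch => ch + d) as RGB);
}

// Force the saturation of c to s (operating on the sorted channels).
function setSat(c: RGB, s: number): RGB {
  const out = [...c] as RGB;
  const [iMin, iMid, iMax] = [0, 1, 2].sort((a, b) => c[a] - c[b]);
  if (out[iMax] > out[iMin]) {
    out[iMid] = ((out[iMid] - out[iMin]) * s) / (out[iMax] - out[iMin]);
    out[iMax] = s;
  } else {
    out[iMid] = out[iMax] = 0;
  }
  out[iMin] = 0;
  return out;
}

// Hue blend: hue of the source, saturation and luminosity of the backdrop.
function blendHue(Cb: RGB, Cs: RGB): RGB {
  return setLum(setSat(Cs, sat(Cb)), lum(Cb));
}

// Example with the question's colours, normalised from 0-255:
const src: RGB = [255 / 255, 213 / 255, 0 / 255];
const bd: RGB = [141 / 255, 214 / 255, 214 / 255];
console.log(blendHue(bd, src).map(ch => Math.round(ch * 255)));
```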
body {
padding:0;
margin:0;
}
iframe {
width: 100%;
height: 200px;
border:0;
padding:0;
margin:0;
}
<iframe src="https://zikro.gr/dbg/html/blending-modes/"></iframe>
My test example can be found here. At 2.5 With Alpha (Hue Blending Algorithm Computed), you can see the final hue blend mode result with alpha. It has slightly different values than the Photoshop result, but I got the exact same result (202, 205, 118) in Fireworks when hue blending the source and backdrop colors:
All applications have their own slightly different algorithms, and maybe the formula I have used is rather old and there is a newer version.
Starting from here
Hue blending creates a color with the hue of the source color and the saturation and luminosity of the backdrop color.
I can come up with some formulas, but they might be rubbish, although they completely reproduce the original numbers posted:
h: hSource + deltaH * (1 - aSource) * aBackdrop * 0.41666666 = 50; 63
s: sBackdrop * 0.9 + deltaS * (1 - aBackdrop) * aSource * 0.20833333 = 45; 47.5
l: lBackdrop * 0.957142857 + deltaL * (1 - aBackdrop) * aSource * 0.77 = 67; 63.3
a: 1 - (1 - aSource)^2 always matches

Three.js determine camera distance based on object3D size

I'm trying to determine how far away the camera needs to be from my object3D which is a collection of meshes in order for the entire model to be framed in the viewport.
I get the object3D size like this:
public getObjectSize(target: THREE.Object3D): Size {
  let box: THREE.Box3 = new THREE.Box3().setFromObject(target);
  let size: Size = {
    depth: box.max.z - box.min.z,
    height: box.max.y - box.min.y,
    width: box.max.x - box.min.x
  };
  return size;
}
Next I use trig in an attempt to determine how far back the camera needs to be based on that box size in order for the entire box to be visible.
private determineCameraDistance(): number {
  let cameraDistance: number;
  let halfFOVInRadians: number = this.geometryService.getRadians(this.FOV / 2);
  let height: number = this.productModelSizeService.getObjectSize(this.viewService.primaryView.scene).height;
  let width: number = this.productModelSizeService.getObjectSize(this.viewService.primaryView.scene).width;
  cameraDistance = (width / 2) / Math.tan(halfFOVInRadians);
  return cameraDistance;
}
The math all works out on paper, and the length of the adjacent side of the triangle (the camera distance) can be verified using a^2 + b^2 = c^2. However, for some reason the distance returned is 10.4204, while the camera distance I actually need to show the entire object3D is 95 (determined by hard-coding the value), so I can only see a tiny portion of my model.
Any ideas on what I might be doing wrong, or a better way to determine this? It seems like there is some kind of unit conversion I'm missing when going from the box sizing units to camera distance units.
Actual numbers used in the calculation:
FOV = 110 degrees,
Object3D size: {
Depth: 11.6224,
Height: 18.4,
Width: 29.7638
}
So we take half the field of view to create a right triangle, with the adjacent side placed along our camera distance; that's 55 degrees. We then use the formula degrees * PI / 180 to convert 55 degrees into the radian equivalent, which is 0.9599. Next we take half the object3D width, again to create a right triangle, which is 14.8819. We can now divide that half width by the tangent of the half FOV (in radians), which gives us the length of the adjacent side / camera distance: 10.4204.
To further verify that this is the correct length for this side, I'll get the length of the hypotenuse using SOHCAHTOA again:
Sin(55) = 14.8819 / y
.8192 * y = 14.8819
y = 14.8819 / .8192
y = 18.1664
Now, using this, we can apply the Pythagorean theorem to solve for b and check our math.
14.8819^2 + b^2 = 18.1664^2
221.4709 + b^2 = 330.0018
b^2 = 108.5835
b = 10.4203 (we're off by .0001 but that's due to rounding)
The issue ended up being that in THREE.js the field of view represents the vertical viewing area. I had been assuming that THREE, like Maya and other applications, uses the field of view as the horizontal viewing area.
Multiplying the FOV that I was getting by the Aspect Ratio gives me the correct horizontal field of view, which results in a Camera distance of ~92.
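A sketch of that fix, assuming a standard THREE.PerspectiveCamera whose fov property is the vertical field of view in degrees: the exact conversion between vertical and horizontal FOV goes through tangents rather than a straight multiply by the aspect ratio (for small FOVs the multiply is a close approximation). All names and example numbers below are illustrative, not taken from the question's services:

```typescript
// Distance needed to frame a box of a given width/height with a perspective
// camera whose `fov` is the VERTICAL field of view in degrees.
function frameDistance(
  verticalFovDeg: number,
  aspect: number, // viewport width / height
  objectWidth: number,
  objectHeight: number
): number {
  const vFov = (verticalFovDeg * Math.PI) / 180;
  // Exact horizontal FOV for a perspective projection:
  const hFov = 2 * Math.atan(Math.tan(vFov / 2) * aspect);

  const distForHeight = (objectHeight / 2) / Math.tan(vFov / 2);
  const distForWidth = (objectWidth / 2) / Math.tan(hFov / 2);

  // Use whichever constraint pushes the camera further back.
  return Math.max(distForHeight, distForWidth);
}

// Example with the object size from the question (camera parameters illustrative):
console.log(frameDistance(45, 16 / 9, 29.7638, 18.4));
```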

Swift: how to show a zoomed area of an UIImageView into another UIImageView

I know I have written many posts about this already, but this could solve my problem with precision. I have seen that many apps show a zoomed area of a UIImageView in the neighborhood of a draggable object, like the following screenshot (Jobnote):
I would like to know the fastest and simplest way to get something similar. I guess the circular view is a UIImageView, but I don't know how to get the zoom inside it. Any help would be very kind of you. Thank you.
UPDATE:
Is it possible to get this by using MagnifierContainerView and MagnifyingGlassView described here?
I've done something not too dissimilar, popping up a 1:1 preview of a high-resolution image, which you can read about here.
In a nutshell, I have a peekPreviewSize which is the length of the sides of my square zoomed-in preview. I can then define an offset from my touch location:
let offset = ((peekPreviewSize * imageScale) / (imageWidth * imageScale)) / 2
Next, I calculate the distance between the edge of the component and the edge of the image it contains:
let leftBorder = (bounds.width - (imageWidth * imageScale)) / 2
Then, with the location of the touch point and these two new values, I can create the normalised x origin of the clip rectangle:
let normalisedXPosition = ((location.x - leftBorder) / (imageWidth * imageScale)) - offset
I do the same for y and with those two normalised values create a preview point:
let topBorder = (bounds.height - (imageHeight * imageScale)) / 2
let normalisedYPosition = ((location.y - topBorder) / (imageHeight * imageScale)) - offset
let normalisedPreviewPoint = CGPoint(x: normalisedXPosition, y: normalisedYPosition)
...which is passed to my ForceZoomPreview:
let peek = ForceZoomPreview(normalisedPreviewPoint: normalisedPreviewPoint, image: image!)
My previewing component now has very little work to do. It's passed the normalised origin in its constructor (above), so all it needs to do is use those values to set the contentsRect of an image view:
imageView.layer.contentsRect = CGRect(
x: max(min(normalisedPreviewPoint.x, 1), 0),
y: max(min(normalisedPreviewPoint.y, 1), 0),
width: view.frame.width / image.size.width,
height: view.frame.height / image.size.height)

algorithm which will find a best fitting picture

I have a list of images, each one of them has width and height.
and I have one div - with width and height
now I have to find out which image would look best in my div - i.e. which would be the least distorted.
What I'm trying now is checking the aspect ratios and widths - but then I need some kind of weight for how important each one should be.
Is there a better way to do it? Any ready-to-use algorithms?
edit: about the weight - let's say I have a div of size 100 x 50, and 2 images: 2000 x 1000 and 101 x 51. The ratio of the first one is perfect - but I would have to scale it down 20 times, so it would be easier for the browser and probably better for the viewer experience to use the second image. So I use
a = abs((img.aspect_ratio - div.aspect_ratio) / (img.aspect_ratio + div.aspect_ratio))
b = abs((img.width - div.width) / (img.width + div.width))
// division to scale the value between (0, 1)
and then look for the image with the smallest a + b. To get a better effect I tried using a + 2b instead - these are the weights.
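A small sketch of that scoring (the weight of 2 on the size term is the one mentioned above; everything else is illustrative):

```typescript
interface Pic { width: number; height: number }

// Score how well an image fits a target box: lower is better.
// `a` penalises aspect-ratio mismatch, `b` penalises size mismatch,
// and `sizeWeight` controls how important the size term is (2, as tried above).
function fitScore(img: Pic, box: Pic, sizeWeight = 2): number {
  const imgRatio = img.width / img.height;
  const boxRatio = box.width / box.height;
  const a = Math.abs((imgRatio - boxRatio) / (imgRatio + boxRatio));
  const b = Math.abs((img.width - box.width) / (img.width + box.width));
  return a + sizeWeight * b;
}

function bestFit(images: Pic[], box: Pic): Pic {
  return images.reduce((best, img) =>
    fitScore(img, box) < fitScore(best, box) ? img : best
  );
}

// The example from the edit: a 100 x 50 div and two candidate images.
const div = { width: 100, height: 50 };
console.log(bestFit([{ width: 2000, height: 1000 }, { width: 101, height: 51 }], div));
// -> { width: 101, height: 51 } : nearly the same ratio, far less downscaling
```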

object dimensions in image according to camera distance faraway

I have a camera placed 10 meters away from a portrait (rectangle) with width = 50 cm and height = 15 cm, and I want to get the dimensions of this portrait inside the captured image. The captured image has width = 800 px and height = 600 px.
How can I calculate the dimensions of the portrait inside the image? Any help please?
I am assuming the camera is located along the center normal of the portrait, looking straight at the portrait's center.
Let's define some variables.
Horizontal field of view: FOV (you need to specify)
Physical portrait width: PW = 50 cm
Width of portrait plane captured: CW cm (unknown)
Image width: IW = 800 px
Width of portrait in image space: X px (unknown)
Distance from camera to subject: D = 10 m
We know tan(FOV) = (CW cm) / (100 * D cm). Therefore CW = tan(FOV) * 100 * D cm.
We know PW / CW = X / IW. Therefore X = (IW * PW) / (tan(FOV) * 100 * D) px.
I agree with Timothy's answer in that you need to know the camera's field of view (FOV). I'm not sure I totally follow/agree with his method, however. I think this is similar, but it differs: the FOV needs to be divided by two to split our view into two right-angled triangles. Use tan(x) = opposite / adjacent:
tan(FOV/2) = (IW/2) / (Dist * 100)
where IW is the real-world width covered by the entire image (we divide by two because the right-angled triangle only covers half of that width), and Dist is the distance from the camera to the portrait (converted to cm).
Rearrange that to find the real-world width covered by the entire image (IW):
IW = tan(FOV/2) * (2 * Dist * 100)
You can now work out the real-world width of each pixel (PW) using the number of pixels across the image width (800 for you).
PW = IW / NumPixels
PW = IW / 800
Now divide the portrait's true width by this value to find its width in pixels.
PixelWidth = TrueWidth / PW
The same can be done for the height, but you need your camera's vertical field of view.
I'm not sure this is the same as Timothy's answer, but I'm pretty sure this is correct.
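Putting those steps into a short sketch (the FOV values are placeholders you would substitute with your camera's actual horizontal and vertical field of view, in degrees):

```typescript
// How many pixels the portrait spans in the image, given the camera's FOV.
// Distance in metres, physical sizes in cm, image size in pixels.
function portraitSizeInPixels(
  fovHorizontalDeg: number,
  fovVerticalDeg: number,
  distanceM: number,
  portraitWidthCm: number,
  portraitHeightCm: number,
  imageWidthPx: number,
  imageHeightPx: number
): { widthPx: number; heightPx: number } {
  const toRad = (deg: number) => (deg * Math.PI) / 180;

  // Real-world width/height covered by the whole image at that distance (in cm).
  const coveredWidthCm = Math.tan(toRad(fovHorizontalDeg) / 2) * 2 * distanceM * 100;
  const coveredHeightCm = Math.tan(toRad(fovVerticalDeg) / 2) * 2 * distanceM * 100;

  // The portrait occupies the same fraction of pixels as it does of the covered area.
  return {
    widthPx: (portraitWidthCm / coveredWidthCm) * imageWidthPx,
    heightPx: (portraitHeightCm / coveredHeightCm) * imageHeightPx,
  };
}

// Example with the question's numbers and a made-up 60° x 46° field of view:
console.log(portraitSizeInPixels(60, 46, 10, 50, 15, 800, 600));
```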
