I'm trying to increase the distance from which AR.js can detect a marker, using AFrame AR.js.
As I understand it, sourceWidth and sourceHeight determine which resolution is requested from the webcam, while canvasWidth and canvasHeight set the size of the canvas that ARController uses to process each frame.
Both default to 640 x 480, but changing one requires changing the other as well, or the aspect ratio will no longer be correct. However, different devices have different aspect ratios, especially at higher resolutions, and the rendering gets distorted.
The problem probably lies in having to define the canvas dimensions before the webcam source dimensions are known. Is there a way to set the canvas dimensions based on the initialized ARSource?
Or is there a different way to simply raise the resolution of the processing without worrying about supported resolutions and aspect ratios?
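For reference, a sketch of how these parameters can be set on the a-scene arjs attribute (the values are just an example of a source and canvas pair with matching aspect ratios, not a recommendation):
<!-- Hedged sketch: request a 16:9 webcam feed and give the processing canvas
     the same aspect ratio, so the render is not distorted. -->
<a-scene embedded
         arjs="sourceType: webcam;
               sourceWidth: 1280; sourceHeight: 720;
               canvasWidth: 640; canvasHeight: 360;">
  <a-marker preset="hiro"></a-marker>
  <a-entity camera></a-entity>
</a-scene>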
I'm currently trying to make my simple game scale with the resolution. I've noticed, though, that when I change the resolution not everything works out. For instance, with the shift from 1280x720 to 1920x1080 the jumping distance changes slightly. The main problem I've noticed is when I fire a projectile with a velocity: on lower resolutions it seems to travel across the screen significantly faster, and I can't understand why, as it should scale down with the size of the window. Here is a snippet of the code that fires a projectile:
m = new Box(l.pos.x+Width/32*direction2, l.pos.y-Height/288, Width/64, Height/72, true, 4);
m.body.setGravityScale(0f);
boxes.add(m);
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
In this scenario m is the box I'm creating. In new Box(spawn x coordinate, spawn y coordinate, width of box, height of box, is the box movable, type of box), l.pos.x and l.pos.y are the position I'm firing the box from. The Width and Height variables are the size of the current window in pixels, updated in void draw(), and direction2 is either 1 or -1 depending on the direction the character is facing.
It's hard to tell how the rest of the code affects the simulation without seeing more of it.
Ideally you want to keep physics-related properties independent of the Processing sketch dimensions, but maintain proportions, so you can simply scale up the rendering of the same fixed-size world. If you have mouse interaction the coordinates would scale as well, but other than positions, the rest of the physical properties should stay the same.
From what I can gather in your code, if Width is the sketch's width, you should decouple it from the linear velocity:
m.body.setLinearVelocity(new Vec2(Width*direction2, 0));
and instead use a value that stays constant regardless of the sketch dimensions.
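As a hedged sketch, reusing your variable names, that could look like defining a fixed launch speed in world units and using it instead of Width:
// Hedged sketch: fixed launch speed in physics/world units,
// independent of the current window size (20 is just an assumed value to tune).
final float LAUNCH_SPEED = 20;

m.body.setLinearVelocity(new Vec2(LAUNCH_SPEED * direction2, 0));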
I'm trying to scale sprites to have a size defined in px, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them to be scaled based on distance from the camera, but I struggle with setting the scale. I don't really know the conversion formula, and when I hardcoded the scale with a number that looked right on one device, it was wrong on another. Any advice or help on how to get the sprites sized correctly across multiple devices? Thanks
Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units may take a lot of calculations because it varies based on your camera's FOV, distance from camera, window height, pixel density, and so on...
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that's measured in pixels when sizeAttenuation is set to false. Just keep in mind that it has a maximum size limitation based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
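A minimal sketch of that setup (spriteTexture is assumed to be the texture you were using on the sprite):
// Hedged sketch: one point rendered at a fixed on-screen size of 20 px.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));

const material = new THREE.PointsMaterial({
  size: 20,                // interpreted as pixels because sizeAttenuation is false
  sizeAttenuation: false,
  map: spriteTexture,      // assumed texture
  transparent: true
});

scene.add(new THREE.Points(geometry, material));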
My original answer continues below:
However, "1 px" is a subjective measurement nowadays because if you use renderer.setPixelRatio(window.devicePixelRatio); then you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 and above, some cell phones have pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or if you use it, you'll have to use a multiplication:
const s = 5;
points.material.size = s * window.devicePixelRatio;
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So when you shrink your browser window vertically, the Point size will remain the same, but the meshes will scale down to fit in the viewport. This means that a 5px Point will take up more real estate in a small window than it would on a large monitor. If this is the problem, make sure you use the window.innerHeight value when calculating Point size.
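For example, a hedged resize handler (the 1080 reference height is an arbitrary assumption) that keeps the point at a constant fraction of the viewport height:
// Hedged sketch: rescale the point size whenever the window height changes.
const BASE_SIZE = 20;      // desired size in px at the reference height
const BASE_HEIGHT = 1080;  // assumed reference window height

window.addEventListener('resize', () => {
  material.size = BASE_SIZE * (window.innerHeight / BASE_HEIGHT);
});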
I have a viewer with a perspective camera. I know the size of the viewer and the pixel ratio. I have several sprites in my scene that use the .sizeAttenuation property so that they never change size.
With all of this, I want to be able to set the scale of the sprite instances to, for example, be 20px x 20px. Is that possible? Is there a known conversion from pixels to sprite scale?
What I am experiencing now is that the sprites will change size depending on the viewer size. I wish to know how to resize them when the viewer changes so they are consistently the same size.
thanks!
As modern macOS devices use a scaled HiDPI resolution by default, bitmap images get blurred on screen. Is there a way to render a bitmap pixel by pixel to the true native physical pixels of the display? Any Core Graphics, OpenGL, or Metal API that would allow this without changing the display mode of the screen?
If you are thinking of convertXXXXToBacking and friends, stop. Here is the explanation. A typical 13-inch MacBook Pro now has a native 2560x1600 pixel resolution. The default recommended screen resolution is 1440x900 after a fresh macOS install. The user can change it to 1680x1050 via System Preferences. In either the 1440x900 or the 1680x1050 case, the backingScaleFactor is exactly 2. The typical rendering route renders everything first to the non-physical 2880x1800 or 3360x2100 resolution, and the OS/GPU does the final resampling with an unknown method.
Your best bet is probably Metal. MTKView's drawableSize claims that the default is the view's size "in native pixels". I'm not certain if that means device pixels or backing store pixels. If the latter, you could turn off autoResizeDrawable and set drawableSize directly.
To obtain the display's physical pixel count, you can use my answer here.
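A hedged Swift sketch of that route; contentRect, physicalWidth, and physicalHeight are assumed to come from your window setup and the display query mentioned above:
// Hedged sketch: stop MTKView from sizing its drawable off the backing store
// and pin the drawable to the display's physical pixel count instead.
import MetalKit

let mtkView = MTKView(frame: contentRect, device: MTLCreateSystemDefaultDevice())
mtkView.autoResizeDrawable = false
mtkView.drawableSize = CGSize(width: physicalWidth, height: physicalHeight)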
You can also try using Core Graphics with a CGImage of that size drawn to a rect of the screen size in backing store pixels. It's possible that all of the scaling will be internally cancelled out.
If these are resource images, use asset catalogs to provide @2x and @3x resolution versions of your images, icons, etc. Classes like NSImageView will automatically select the best version for the display resolution.
If you just have some random image and you want it draw at the resolution of the display, get the backingScaleFactor of the view's window. If it's 2.0 then drawing a 200 x 200 pixel image into a 100 x 100 coordinate point rectangle will draw it at 1:1 native resolution. Similarly, if the scale factor is 3.0, a 300 x 300 pixel image will draw into a 100 x 100 coordinate point rectangle.
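A hedged Swift sketch of that calculation, assuming image is an NSImage and rep is its NSBitmapImageRep (which reports its size in pixels):
// Hedged sketch: draw the bitmap so one image pixel maps to one backing-store pixel.
let scale = view.window?.backingScaleFactor ?? 1.0               // e.g. 2.0 on Retina
let pointRect = NSRect(x: 0, y: 0,
                       width: CGFloat(rep.pixelsWide) / scale,   // 200 px -> 100 pt at 2x
                       height: CGFloat(rep.pixelsHigh) / scale)
image.draw(in: pointRect)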
Not surprisingly, Apple has an extensive guide on this subject: High Resolution Guidelines for OS X
How does the TextSize property on an SKPaint object relate to the 'standard' Xamarin Forms FontSize?
In the image you can see the difference between size 40 on a label and as painted. What would I need to do to make them the same size?
As @hankide mentioned, it has to do with the fact that the native OS scales UI elements so the app "looks the same size" on different devices.
This is great for buttons and the like, since the OS is drawing them: if the button is bigger, the OS just scales up the text. However, with SkiaSharp, we have no idea what you are drawing, so we can't do any scaling. If we were to scale, the image would become blurry or pixelated on high-resolution screens.
One way to get everything the same size is to do a global scale before drawing anything:
var scale = (float)canvasWidth / (float)viewWidth;
canvas.Scale(scale);
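In a Xamarin.Forms SKCanvasView that scaling typically lives in the PaintSurface handler; a hedged sketch with assumed names:
// Hedged sketch: scale the canvas so one canvas unit matches one Forms unit.
void OnPaintSurface(object sender, SKPaintSurfaceEventArgs e)
{
    var canvas = e.Surface.Canvas;
    var canvasWidth = e.Info.Width;                    // physical pixels
    var viewWidth = ((SKCanvasView)sender).Width;      // device-independent units

    canvas.Clear(SKColors.White);
    canvas.Scale((float)(canvasWidth / viewWidth));

    using (var paint = new SKPaint { TextSize = 40, IsAntialias = true })
    {
        canvas.DrawText("Size 40", 10, 50, paint);     // should now be close to a FontSize=40 label
    }
}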
And this is often good enough, but sometimes you really want to draw items differently on a high resolution screen. An example would be a tiled background. Instead of stretching the image on a bigger canvas, you may want to just tile it - preserving the pixels.
In the case of this question, you can either scale the entire canvas before drawing, or you can just scale the text:
var paint = new SKPaint {
    TextSize = 40 * scale
};
This way, the text size is increased, but the rest of the drawing is on a larger canvas.
I have an example on GitHub: https://github.com/mattleibow/SkiaSharpXamarinFormsDemo
This compares Xamarin.Forms, SkiaSharp and Native labels. (They should all be exactly the same size)
I think that the problem is in the way Xamarin.Forms handles font sizes. For example on Android, you could define the font size in pixels (px), scale-independent pixels (sp), inches (in), millimeters and density-independent pixels (dp/dip).
I can't remember how Xamarin.Forms handles the sizes (px, sp or dp), but the difference you see here is because of that. What you could do is create an Effect that changes the font size handling on the native control and try to match the sizing provided by SkiaSharp.