I'm trying to scale a bunch of tiled images in Safari. Safari on iOS displays this correctly. However, on the Mac, there is some weird flickering visible between the images while they are scaling. Once the animation stops, transparent lines are also visible between the tiles.
I've made a Pen showing the behaviour: http://cdpn.io/gdcxD
I've read a bunch of stuff suggesting that I use -webkit-backface-visibility: hidden or some other esoteric webkit directive. I've tried a couple of those, but they do not make any difference.
Ultimately, this will be used in an iBooks Author widget, so it needs to work both on iOS and on the Mac.
I do not really care how this gets solved except for one thing: I need to keep the -webkit-transform: translateZ(0) on the images so they display correctly on iOS.
Any ideas?
Edit: The iBooks Author Widget I'm making will allow the user to zoom in on the tiled image. The images I'm using are large (1500 x 1200 px), but I'm scaling them down on the page to show the whole tiled image. I need to preserve their quality so the user can zoom in on them.
This is likely due to the browser using native system graphics processing, such as anti-aliasing, when scaling images. On the Mac it seems that images are not anti-aliased correctly at the edges, which at some sizes leaves transparent (anti-aliased) pixels along them.
One way to solve this is to draw all your tiles to a single canvas element, then transform the canvas element instead.
var canvas = document.createElement('canvas'),
    ctx = canvas.getContext('2d'),
    tileW = 100,
    tileH = 100,
    cols = 10,
    rows = 10,
    x, y;

canvas.width = tileW * cols;
canvas.height = tileH * rows;

for (y = 0; y < rows; y++) {
    for (x = 0; x < cols; x++) {
        /// use a sprite sheet or an image array here... just for example
        ctx.drawImage(spriteSheet,
                      srcX, srcY, tileW, tileH, // source position in the sprite sheet (or 0,0 if image array)
                      x * tileW, y * tileH, tileW, tileH);
    }
}

parentElement.appendChild(canvas);
Now apply the CSS transform to the canvas element instead of to the individual images.
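For instance, with the canvas built above, the scaling from the question could target the canvas instead of each tile. A minimal sketch, keeping the translateZ(0) the question requires (the 0.5 scale factor is only an example):
// keep translateZ(0) for iOS; the scale factor here is illustrative
canvas.style.webkitTransform = 'translateZ(0) scale(0.5)';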
Hope this helps.
Try using position:relative; on the img element.
On Xiaomi devices, an image is drawn outside the camera's letterbox. On other devices everything is correct.
I attached screenshots from both a Samsung and a Xiaomi; the one that looks wrong is the Xiaomi, and the Samsung looks fine.
float targetaspect = 750f / 1334f;

// determine the game window's current aspect ratio
float windowaspect = (float)Screen.width / (float)Screen.height;

// current viewport height should be scaled by this amount
float scaleheight = windowaspect / targetaspect;

// obtain camera component so we can modify its viewport
Camera camera = GetComponent<Camera>();

// if scaled height is less than current height, add letterbox
if (scaleheight < 1.0f)
{
    Rect rect = camera.rect;

    rect.width = 1.0f;
    rect.height = scaleheight;
    rect.x = 0;
    rect.y = (1.0f - scaleheight) / 2.0f;

    camera.rect = rect;
}
Try setting the texture to Clamp instead of Repeat.
This will still give you black borders, but you won't have that weird texture at the edges.
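A minimal sketch of doing that from code, assuming the image is the main texture of the material on the object's renderer (you can also just set Wrap Mode to Clamp in the texture's import settings):
using UnityEngine;

public class ClampTextureEdges : MonoBehaviour
{
    void Start()
    {
        // stop the GPU from sampling the opposite edge of the texture
        // when UVs land exactly on the border
        Texture tex = GetComponent<Renderer>().material.mainTexture;
        tex.wrapMode = TextureWrapMode.Clamp;
    }
}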
I don't know what caused the problem, but I solved it in a tricky way: I added a second camera that only displays a black background. Only my main camera's viewport is letterboxed, not the second camera's, so the display looks good.
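A minimal sketch of that workaround, assuming the script is attached to the extra camera (the names here are illustrative):
using UnityEngine;

public class BlackBackgroundCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        cam.depth = -100;                             // render before the main camera
        cam.clearFlags = CameraClearFlags.SolidColor; // clear to a flat color
        cam.backgroundColor = Color.black;            // this fills the letterbox bars
        cam.cullingMask = 0;                          // draw no scene objects at all
        cam.rect = new Rect(0f, 0f, 1f, 1f);          // full screen, unlike the letterboxed main camera
    }
}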
I'm trying to make my UI elements work and remain the same for every different resolution. I added a Canvas Scaler to my Canvas and played around with the settings until it looked finished.
I then tried building the game and running it at a few different resolutions to confirm that it was working. However, the Canvas Scaler doesn't seem to work.
http://prntscr.com/d1afz6
Above is some random resolution, but that's how big my editor screen is and that's what I'm using as my reference resolution. Here's the hierarchy for this specific Canvas: http://prntscr.com/d1aggx. It takes up almost the whole screen when run at 640x480. I have no clue why this is not working. I've read most of the Unity guides on this, but none of them seem to cover this problem.
OK, to fit something regardless of the screen size, you have to use a coordinate system separate from Unity's absolute one. One of Unity's models is the view, whose coordinates run from 0,0 at the top left to 1,1 at the bottom right. A basic Rect that handles that looks like the following.
using UnityEngine;

namespace SeaRisen.nGUI
{
    public class RectAnchored
    {
        public float x, y, width, height;

        public RectAnchored(float x, float y, float width, float height)
        {
            this.x = x;
            this.y = y;
            this.width = width;
            this.height = height;
        }

        public static implicit operator Rect(RectAnchored r)
        {
            return new Rect
            {
                x = r.x * Screen.width,
                y = r.y * Screen.height,
                width = r.width * Screen.width,
                height = r.height * Screen.height
            };
        }
    }
}
Here we take the normal Rect floats, the x,y coordinates along with a width and height, but as values in [0..1]. I don't clamp them, so the rect can be tweened on and off the screen with animation, if desired.
The following is a simple script that creates a button in the lower-right corner of the screen and resizes it as the screen grows or shrinks.
void MoveMe()
{
    RaycastHit hit;
    if (Physics.Raycast(transform.position, -Vector3.up, out hit, float.MaxValue)
        || Physics.Raycast(transform.position, Vector3.up, out hit, float.MaxValue))
        transform.position = hit.point + Vector3.up * 2;
}

void OnGUI()
{
    if (GUI.Button(new RectAnchored(.9f, .9f, .1f, .1f), "Fix me"))
    {
        MoveMe();
    }
}
The X is .9 of the way to the right and the Y .9 of the way down, with a width and height of .1, so the button is 1/10th of the screen in height and width, positioned in the bottom-right 1/10th of the screen.
Since OnGUI is rendered every frame (or so), the button rect updates with the screen resize automatically. The same would work in a typical UI, if you are using Update() to render your windows.
I hope this explains the difference from what I meant by absolute coordinates. Writing the previous example with absolutes at 640x480, it'd be something like new Rect(576, 432, 64, 48), and it wouldn't scale. By using new RectAnchored(.9f, .9f, .1f, .1f) and rendering it into UI space based on the screen size, it scales automatically.
I love the way Sublime Text shows its product demo on its home page:
http://www.sublimetext.com/
How can I create a similar demo? All I can tell is that it is a canvas element.
Sorry if this sounds like a basic question. Any leads or help in this regard would be highly appreciated.
They are using delays and parts of images such as this one (look at the bottom part of the image):
and specify what (rectangular) part of each image renders when, making it look like an animation.
It's a typical texture atlas.
This is the list of the images:
"anim/rename2_packed.png",
"anim/days_169_packed.png",
"anim/command_palette_packed.png",
"anim/goto_anything_packed.png",
"anim/goto_anything2_packed.png",
"anim/regex_packed.png"
And this is how they specify the delay and the image pieces
{"delay":1811,"blit":[[0,0,800,450,0,0]]},
{"delay":48,"blit":[[0,450,400,344,200,36],[66,982,63,15,0,36]]},
{"delay":798,"blit":[]}, etc...
As you see, delay is the time in milliseconds, and blit looks like parameters for drawImage - srcX, srcY, width, height, destX, destY.
Each of the "screens" is kept in a separate variable, like command_palette_timeline, days_169_timeline, goto_anything_timeline, etc., each containing a delay/blit array of objects like those in the paragraph above.
The actual render code is pretty straightforward: they follow each step in each timeline, with delays between them, and each step is rendered like this:
for (j = 0; j < blits.length; ++j)
{
    var blit = blits[j];
    var sx = blit[0];
    var sy = blit[1];
    var w  = blit[2];
    var h  = blit[3];
    var dx = blit[4];
    var dy = blit[5];
    ctx.drawImage(img, sx, sy, w, h, dx, dy, w, h);
}
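The delay handling isn't in that snippet. Here is a minimal sketch of how a timeline could be played back, assuming timeline is one of the delay/blit arrays above, img is the matching atlas image, ctx is the canvas context, and each delay is how long its frame stays on screen:
function playTimeline(ctx, img, timeline) {
    var i = 0;

    function step() {
        var frame = timeline[i];

        // paint every dirty rectangle for this frame
        for (var j = 0; j < frame.blit.length; ++j) {
            var b = frame.blit[j];
            ctx.drawImage(img, b[0], b[1], b[2], b[3], b[4], b[5], b[2], b[3]);
        }

        // hold the frame for its delay, then advance (looping at the end)
        i = (i + 1) % timeline.length;
        setTimeout(step, frame.delay);
    }

    step();
}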
This is for the programming language Processing (2.0).
Say I wish to load a non-square image (let's use a green circle for the example). If I draw it on a black background, you can visibly see the white square of the image (i.e., all the parts of the image that aren't the green circle). How would I go about efficiently removing them?
I can't think of an efficient way to do it, and I will be doing it to hundreds of pictures about 25 times a second (since they will be moving).
Any help would be greatly appreciated, the more efficient the code the better.
As #user3342987 said, you can loop through the image's pixels to see whether each pixel is white or not. However, it's worth noting that 255 is white (not 0, which is black). You also shouldn't hardcode the replacement color, as they suggested; what if the image is moving over a striped background? The best approach is to change all the white pixels into transparent pixels using the image's alpha channel. Also, since you mentioned you would be doing it "about 25 times a second", you shouldn't be doing these checks more than once; the result will be the same every time, so repeating them is wasteful. Instead, do it once when the images are first loaded, something like this (untested):
PImage[] images;

void setup() {
  size(400, 400);
  images = new PImage[10];
  for (int i = 0; i < images.length; i++) {
    // example filenames
    PImage img = loadImage("img" + i + ".jpg");
    img.format = ARGB; // JPGs load as RGB; switch to ARGB so pixels can hold transparency
    img.loadPixels();
    for (int p = 0; p < img.pixels.length; p++) {
      // color(255, 255, 255) is white
      if (img.pixels[p] == color(255, 255, 255)) {
        img.pixels[p] = color(0, 0); // set it to transparent (the first number is meaningless)
      }
    }
    img.updatePixels();
    images[i] = img;
  }
}

void draw() {
  // draw the images as normal; the white pixels are now transparent
}
So, this will lead to no lag during draw() because you edited out the white pixels in setup(). Whatever you're drawing the images on top of will show through.
It's also worth mentioning that some image filetypes have an alpha channel built in (e.g., the PNG format), so you could also change the white pixels to transparent in some image editor and use those edited files for your sketch. Then your sketch wouldn't have to edit them every time it starts up.
Pixels are stored in the pixels[] array; you can use a for loop to check whether each value is 0 (aka white). If it is white, load it as the black background.
I'm attempting to do some raw pixel manipulation, but I'm seeing some very inconsistent results on my android device when setting the alpha channel. A simple example of the kinds of things I'm seeing:
<canvas id='canvas' width='128' height='128'></canvas>
var ctx = $("#canvas")[0].getContext('2d');
var image = ctx.getImageData(0, 0, 128, 128);
var idx = 0;

for (var y = 0; y < 128; ++y) {
    for (var x = 0; x < 128; ++x) {
        image.data[idx + 0] = 128;
        image.data[idx + 1] = 128;
        image.data[idx + 2] = 128;
        image.data[idx + 3] = (x + y);
        idx += 4;
    }
}

ctx.putImageData(image, 0, 0);
Code also available here: http://jsfiddle.net/U3rwY/
The intent of the above code is to have a solid gray square with alphas that increase from 0 to 255, so we should see a square that fades from the background color to gray at the bottom corner. And this is exactly what we see on any modern desktop browser:
On android though we see this:
This looks like it is expecting a precomputed color instead of the 128,128,128 I'm giving it. Is that correct, and if so, is there a reliable way of detecting which browsers expect which set of values?
It is possible that your problem comes from a bug in Android's default browser: when it draws a pixel whose alpha value is different from 0 or 255, it alters its color. You are not the only one experiencing this issue: https://code.google.com/p/android/issues/detail?id=17565
I guess the only chance of getting it solved is to report the bug. Also, it seems that the bug was partly fixed in Android 4.1 (while 4.0 still has it).
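As for detecting affected browsers, one possible feature test (a sketch, not a guaranteed method) is to write a known semi-transparent pixel to a scratch canvas and read it back; if the color channels come back noticeably altered, the browser is mangling non-opaque pixels:
function alphaIsReliable() {
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 1;
    var ctx = canvas.getContext('2d');

    // write one mid-gray pixel at 50% alpha
    var px = ctx.createImageData(1, 1);
    px.data[0] = 128;
    px.data[1] = 128;
    px.data[2] = 128;
    px.data[3] = 128;
    ctx.putImageData(px, 0, 0);

    // read it back; allow a little premultiplication rounding error
    var rb = ctx.getImageData(0, 0, 1, 1).data;
    return Math.abs(rb[0] - 128) <= 2 &&
           Math.abs(rb[1] - 128) <= 2 &&
           Math.abs(rb[2] - 128) <= 2;
}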
Yeah, the Android browser has a problem with this. Safari 6.0 and Opera Mobile work fine, but Opera Mini and Firefox Mobile do not. You can learn why here: http://www.html5rocks.com/en/tutorials/canvas/hidpi/