When to use PresentationParameters.BackBufferWidth vs .Viewport.Width - xna-4.0

I had to shorten the calls to make the question more readable, but...
When is it correct or incorrect to use one or the other?
I guess in most cases they are the same, since you just have the one Viewport, but if you go split-screen I guess you'll have more than one.

Usually you want the viewport size, as this is the region within which rendering actually takes place.
If you ever add anything like split-screen or picture-in-picture rendering, then you must use the viewport. So you may as well use it to begin with.
You should use the backbuffer size only when that is what you actually want: for example, when taking screenshots, or when setting viewport positions.
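As a hedged illustration of the split-screen case (the DrawScene method and the player fields are hypothetical placeholders, not from the question):

C#

// Position the viewports using the backbuffer size, then let all
// per-player rendering read GraphicsDevice.Viewport instead.
protected override void Draw(GameTime gameTime)
{
    PresentationParameters pp = GraphicsDevice.PresentationParameters;

    Viewport left = new Viewport(0, 0, pp.BackBufferWidth / 2, pp.BackBufferHeight);
    Viewport right = new Viewport(pp.BackBufferWidth / 2, 0,
                                  pp.BackBufferWidth / 2, pp.BackBufferHeight);

    GraphicsDevice.Viewport = left;
    DrawScene(playerOne);   // aspect ratio, HUD layout, etc. use Viewport.Width

    GraphicsDevice.Viewport = right;
    DrawScene(playerTwo);

    base.Draw(gameTime);
}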
I've got a more detailed answer to a very similar question over on the game dev site.

Related

How to use texture masks in GameMaker?

First off, I'm not totally sure that "texture masks" is the correct term to use here, so if someone knows what it is, please let me know.
So, the real question: I want an object in GameMaker: Studio whose texture changes as it moves around, depending on its position, by pulling from a larger static image behind it. I've made a quick GIF of what it might look like.
It can be found here.
Another image that might help explain this is the "source-in" section of this image.
This is a reply to the same question posted on the Steam GML forum by MrDave:
The feature you are looking for is draw_set_blend_mode(bm_subtract).
Basically, you will have to draw everything onto a surface and then use the code above to switch the draw mode to bm_subtract. What this does is, rather than drawing images to the screen, it removes them. So you now draw blocks over the background, and this removes that area. Then you draw everything you just put on the surface onto the screen.
(Remember to reset the draw mode and the surface target afterwards.)
It's hard to get your head around the first time, but it actually isn't all that complex once you get used to it.
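A minimal GML sketch of that technique, assuming hypothetical sprite and surface names (spr_hole, spr_background, mask_surf):

GML

// Draw an opaque cover onto a surface, subtract the moving object's
// shape from it, then composite the surface over the static background.
if (!surface_exists(mask_surf)) mask_surf = surface_create(room_width, room_height);

surface_set_target(mask_surf);
draw_clear_alpha(c_black, 1);            // fully opaque cover layer
draw_set_blend_mode(bm_subtract);
draw_sprite(spr_hole, 0, x, y);          // removes this area from the cover
draw_set_blend_mode(bm_normal);          // remember to reset the blend mode...
surface_reset_target();                  // ...and the surface target

draw_sprite(spr_background, 0, 0, 0);    // the larger static image
draw_surface(mask_surf, 0, 0);           // the background shows through the hole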

Adjusting hard values in Processing for any screen size

So I'm making a game with my group in Processing for a project, and we all have different computers. The problem is we built the game on one computer, and at this point we have realized that the (1200, 800) size we used does not work on our professor's computer. Unfortunately we have hard-coded thousands of values to fit this resolution. Is there any way to make it fit on all computers?
From my own research I found you can use screen.width and screen.height to get the size of the screen, and I set the game window to about half the screen size. However, all the images I loaded for the background and so on are 1200x800, so I am unsure how to go about modifying ALL of my pictures (backgrounds) and hard values.
Is there any way to fix this without manually changing the thousands of hard-coded values? (Yes, I am fully aware of how bad it is that I hard-coded the numbers.)
Any help would be greatly appreciated. As mentioned in the title, the language is Processing.
As I'm sure you have learned your lesson about hard-coding numbers, I won't say anything about it :)
You may have heard of embedding a Processing PApplet inside a traditional Java JFrame or similar. If you are okay with scaling the image that your PApplet draws (i.e. it draws at the resolution you've coded, and the resulting image is then scaled up or down to match the screen), then you could embed your PApplet in a frame, capture the PApplet's output to an image, scale the image, then draw it to the screen. A quick googling yielded this SO question. It may make your game look funny if the resolutions are too different, but this is a quick and dirty way. You'll probably want to do this in a separate thread, as suggested here.
Having said that, I do not recommend it. One of the best things (IMO) about Processing is not having to mess directly with AWT/Swing. It's also a messy kludge, and the "right thing to do" is to go back and change the hard-coded numbers to variables. For your images, you can use PImage's resize(). You say your code is several hundred lines long, but in reality that isn't a huge amount; the best thing to do is just to suck it up and be unhappy for a few hours. Good luck!
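If you do refactor, here is a hedged sketch in plain Processing of scaling the whole coordinate system with scale() (an alternative to the JFrame embedding above; the asset name is hypothetical, and the 1200x800 design size is from the question):

Processing

// Keep the hard-coded 1200x800 coordinates, but scale the whole
// coordinate system to the actual window size each frame.
final int DESIGN_W = 1200;
final int DESIGN_H = 800;
PImage bg;

void setup() {
  size(displayWidth / 2, displayHeight / 2);  // about half the screen
  bg = loadImage("background.png");           // hypothetical 1200x800 asset
  bg.resize(width, height);                   // PImage.resize(), as suggested above
}

void draw() {
  image(bg, 0, 0);
  pushMatrix();
  scale(width / (float) DESIGN_W, height / (float) DESIGN_H);
  // ...existing drawing code with hard-coded 1200x800 values goes here...
  popMatrix();
}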

Enlarging image without affecting clarity

I need to enlarge a downloaded image without affecting its clarity, but when it is resized the clarity is gone. Can anyone help?
Given the context, by "clarity" I assume you mean visual appearance. You want your upscaled image (again, I believe you are dealing with upscaling and not downscaling, as it is not specified in your question) to look visually good. We actually can "magically" create detail, though probably not perfectly. There are techniques for working specifically with pixelated images, for instance hqx or http://research.microsoft.com/en-us/um/people/kopf/pixelart/paper/pixel.pdf. Since that is not clear from your description either, I'm simply assuming you have images of any kind.
With these considerations, you have yet to describe what you tried. Let me guess: you tried nearest-neighbor interpolation, which produces a blocky, pixelated enlargement.
There are other common types of interpolation, like bicubic and Lanczos, and more modern methods like ICBI or http://www.cs.huji.ac.il/~raananf/projects/lss_upscale/paper.pdf. Comparing the first three of those gives visibly different results.
It may be a little hard to see the differences among these three, but if you zoom into the actual images you will be able to notice them. ICBI gives the sharpest edges in this case.
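To make the comparison concrete, here is a hedged sketch using Python with the Pillow library (the question names no platform, so the library choice and file names are assumptions):

Python

from PIL import Image  # Pillow

img = Image.open("input.png")       # hypothetical input file
w, h = img.size

# The same 4x upscale with three different interpolation filters:
nearest = img.resize((4 * w, 4 * h), Image.NEAREST)   # blocky, pixelated
bicubic = img.resize((4 * w, 4 * h), Image.BICUBIC)   # smoother, mild ringing
lanczos = img.resize((4 * w, 4 * h), Image.LANCZOS)   # sharper, more ringing

lanczos.save("upscaled_lanczos.png")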
Image resizing will always affect clarity, unless you downloaded a vector graphics image. See if the image has a vector graphics format, and if so, download that.
Failing that, you could try to see whether larger image sizes are available, as shrinking generally hurts image quality less than enlarging.

SVG out of screen: is it rendered?

Scenario: I have an SVG image that I can zoom in and out of. Depending on the zoom, I display more or less detail on the visible part.
The question is: should I take care not to display details on the parts that are not currently visible (off-screen), or is the rendering engine smart enough to skip (clip) those parts before they are rendered?
Yes, browsers are usually clever enough to not render things outside the viewport area.
Note however that the browser still needs to traverse the entire document tree, so even things outside the viewport area can have an impact. It's usually enough to mark the non-interesting subtrees with display="none" to let the browser skip over them when traversing. On small documents that's usually not something that you need to worry about.
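A hedged sketch of that idea in JavaScript (the g.detail selector is a placeholder for whatever marks your detail subtrees, and it assumes zooming is done by changing the viewBox):

Javascript

// Mark detail subtrees outside the current viewBox with display="none"
// so the renderer can skip them entirely while traversing the tree.
function updateDetailVisibility(svg) {
  var vb = svg.viewBox.baseVal;  // current visible region in user units
  svg.querySelectorAll("g.detail").forEach(function (group) {
    var box = group.getBBox();
    var visible = box.x < vb.x + vb.width  && box.x + box.width  > vb.x &&
                  box.y < vb.y + vb.height && box.y + box.height > vb.y;
    group.setAttribute("display", visible ? "inline" : "none");
  });
}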
I guess clipping will always be applied to the current viewport. But you are probably changing the DOM when updating the detail visibility, and restricting those updates to the visible parts only can make a difference.
The easiest way to find this out is to measure, though. Make two prototypes, one with manual clipping and one without, and look for differences in rendering speed in various renderers.

Make Firefox scale images down similar to the results in Chrome or IE

On the left is the original PNG and on the right are versions reduced to roughly half the original size using width and height.
Why does the resized image look so fuzzy in Firefox? Is there anything I can do about it without changing the image file? The fuzziness is particularly annoying if the image contains large amounts of math or text.
I know this is late, but you can trick Firefox into rendering the image better by applying an oh-so-slight rotation. I tried to translate() the image to get the same effect... to no avail.
CSS

/* A tiny rotation forces Firefox into smoother, sub-pixel image scaling. */
.image-scale-hack {
    transform: rotate( .0001deg );
}

Javascript

// Apply the hack only in Firefox, detected via the Mozilla-specific
// MozAppearance style property (sniff borrowed from yepnope.js).
if( "MozAppearance" in document.documentElement.style ) {
    $('.logo img').addClass('image-scale-hack');
}
I avoid browser sniffs at all costs. I borrowed this sniff from yepnope.js, so I don't feel bad about it.
Also noteworthy: this same trick can be used to force sub-pixel image rendering in both WebKit and Firefox. This is useful for very slow animations, best explained by example:
http://jsfiddle.net/ryanwheale/xkxwN/
There is a longstanding bug ticket filed in Bugzilla related to Firefox image downscaling. You might like to keep an eye on the ticket to track its eventual resolution or contribute a patch yourself if you feel able to.
The best workaround is to use the transform CSS property to apply a tiny rotation to the problem image and force sub-pixel rendering, as detailed in Ryan Wheale's answer.
The image-rendering documentation linked from the "Firefox blurs an image when scaled through css or inline style" answer which Su' referenced includes instructions for using image-rendering: optimizeQuality (which corrected the issue in my testing on FF4). For example:
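A minimal sketch of that property (the selector is a placeholder, not from the original answer):

CSS

/* Hint to Firefox to favour quality over speed when scaling this image. */
img.scaled {
    image-rendering: optimizeQuality;
}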
I think your answer is in the link from above https://developer.mozilla.org/En/CSS/Image-rendering:
'Currently auto and optimizeQuality are equal by default, both result in bilinear resampling.'
'default value IE8+: bicubic (high quality)'
Next see:
http://www.codinghorror.com/blog/2007/07/better-image-resizing.html
'When making an image smaller, use bicubic, which has a natural sharpening effect. You want to emphasize the data that remains in the new, smaller image after discarding all that extra detail from the original image.'
I can think of a couple of possible workarounds, but neither are simple:
Resize the image on the server. Either serve it up at half size, and allow Firefox to scale it up to full (which presumably it will be ok at), or have different URLs for the different sizes of image.
You may be able to make this work in the browser with plugins (but the example I found doesn't actually do what you need, so I've removed it).
TL;DR: Image scaling is not likely to be fixed soon. About anywhere.
Longer version:
Eric Brasseur has a page that deals nicely with the broader question "Why is just about any image scaling software so bad?"
http://www.ericbrasseur.org/gamma.html
Since W3C's position on this matter is roughly that it's better to have an implementation that is incorrect but equally incorrect everywhere, they shun any proper handling of gamma (which would complicate matters slightly). Thus anyone accustomed to web standards is likely to continue ignoring gamma, leading to the effects described by Eric and in this thread. This ensures that even downscaling is far from well-defined, as Jeff Atwood puts it in an article linked in another answer.
In such an environment, methods like Lanczos thrive whose claim to fame is mostly that they perform quite well even if implemented incorrectly.
In other words, browsers are the software equivalent of McDonald's burgers, and that fact will stay. Its implications need not, but the odds are skewed.
Now (2017), the bug has been closed for two years. A short test:
[Firefox at 50% and at 25%: comparison screenshots not reproduced]
A workaround for this issue is simply to resize the original image to the desired size with an image editor, and to use the image as it is, without defining its width and height in the style sheet.
