Unity windowed mode size is different on different screen resolutions - Windows

I want the window in the build to be a certain size. It works great on a 1920 x 1080 screen, but on anything more or less than that the window becomes too big or too small. Is there any way to keep the same window-to-screen size ratio on every resolution?
I have used the following settings:
My build settings

AFAIK you can set the resolution depending on the display size using Screen.currentResolution and Screen.SetResolution, somewhat like this:
using UnityEngine;

public class ScreenSizeController : MonoBehaviour
{
    // how much space (percentage) of the screen the window should fill
    [Range(0f, 1f)]
    public float fillX;

    [Range(0f, 1f)]
    public float fillY;

    private void Awake()
    {
        // Get the actual display resolution
        var res = Screen.currentResolution;

        // calculate the target resolution using the fill factors
        // (SetResolution expects ints, so round the results)
        var targetX = Mathf.RoundToInt(fillX * res.width);
        var targetY = Mathf.RoundToInt(fillY * res.height);

        // Set the player resolution (windowed, not fullscreen)
        Screen.SetResolution(targetX, targetY, false);
    }
}
Note: Typed on a smartphone, but I hope the idea is clear.

Wouldn't changing the default screen width/height in the Resolution and Presentation settings to 1920 x 1080 fix it?

Related

macOS: how to resize a window across screens?

I'm trying to programmatically resize macOS windows, similar to Rectangle.
I have the basic resizing code working (for example, moving the window to the right half), and with a single screen it works fine. However, when I try to resize with two screens in a vertical layout, the math does not work:
public func moveRight() {
    guard let frontmostWindowElement = AccessibilityElement.frontmostWindow() else {
        NSSound.beep()
        return
    }

    let screens = screenDetector.detectScreens(using: frontmostWindowElement)
    guard let usableScreens = screens else {
        NSSound.beep()
        print("Unable to obtain usable screens")
        return
    }

    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame
    print("Visible frame of current screen \(usableScreens.visibleFrameOfCurrentScreen)")

    // Right half of the current screen
    let halfPosition = CGPoint(x: screenFrame.origin.x + screenFrame.width / 2, y: -screenFrame.origin.y)
    let halfSize = CGSize(width: screenFrame.width / 2, height: screenFrame.height)

    // Apply the size, move the window, then apply the size again
    frontmostWindowElement.set(size: halfSize)
    frontmostWindowElement.set(position: halfPosition)
    frontmostWindowElement.set(size: halfSize)

    print("movedWindowRect \(frontmostWindowElement.rectOfElement())")
}
If my window is on the main screen the resizing works correctly; however, if it is on a screen below (#3 in the diagram below), the Y coordinate ends up on the top monitor (#2 or #1 depending on the X coordinate) instead of the original one.
The output of the code:
Visible frame of current screen (679.0, -800.0, 1280.0, 775.0)
Raw Frame (679.0, -800.0, 1280.0, 800.0)
movedWindowRect (1319.0, 25.0, 640.0, 775.0)
As far as I can see, the problem lies in how screens and windows are positioned:
I'm trying to understand how I should position the window so that it remains on the correct screen (#3), but I'm having no luck so far; there doesn't seem to be any method to get the absolute screen dimensions needed to place the window at the correct origin.
Any idea how this can be solved?
I figured it out; I had completely missed one of the functions used in the AccessibilityElement class:
static func normalizeCoordinatesOf(_ rect: CGRect) -> CGRect {
    var normalizedRect = rect
    // Flip the Y coordinate against the frame of the screen that owns the menu bar
    let frameOfScreenWithMenuBar = NSScreen.screens[0].frame as CGRect
    normalizedRect.origin.y = frameOfScreenWithMenuBar.height - rect.maxY
    return normalizedRect
}
Basically, since everything is calculated relative to the main screen, there is no option other than to take that screen's coordinates and apply the offset to get the real position of the window.
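For illustration, a rough sketch of how that normalization can be applied inside moveRight (a sketch only; it assumes normalizeCoordinatesOf is exposed as a static member of AccessibilityElement as shown above, and that the target rect is first computed in Cocoa, bottom-left-origin coordinates):

// Sketch: compute the right-half rect in Cocoa (bottom-left origin) coordinates...
let targetRect = CGRect(x: screenFrame.midX,
                        y: screenFrame.origin.y,
                        width: screenFrame.width / 2,
                        height: screenFrame.height)

// ...then flip it into the Accessibility API's top-left-origin coordinate space.
let axRect = AccessibilityElement.normalizeCoordinatesOf(targetRect)

frontmostWindowElement.set(size: axRect.size)
frontmostWindowElement.set(position: axRect.origin)
frontmostWindowElement.set(size: axRect.size)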

GraphicsView fitInView() gives a very pixelated result when downscaling

I have searched everywhere and I cannot find any solution after 2 days of trying.
The Problem:
I'm writing an image viewer with a "Fit Image to View" feature. I load a picture of, say, 3000+ pixels into my GraphicsView (which is a lot smaller, of course), and scrollbars appear; that's good. When I click my btnFitView, this is executed:
ui->graphicsView->fitInView(scene->sceneRect(),Qt::KeepAspectRatio);
This is downscaling, right? After fitInView() all lines are pixelated. It looks like a saw went over the lines in the image.
For example: an image of a car has jagged lines, and in an image of a textbook the letters end up in very bad quality.
My code sample:
// select file, load image in view
QString strFilePath = QFileDialog::getOpenFileName(
    this,
    tr("Open File"),
    "/home",
    tr("Images (*.png *.jpg)"));

imageObject = new QImage();
imageObject->load(strFilePath);
image = QPixmap::fromImage(*imageObject);

scene = new QGraphicsScene(this);
scene->addPixmap(image);
scene->setSceneRect(image.rect());
ui->graphicsView->setScene(scene);

// on_btnFitView_Clicked() :
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
Just before fitInView(), sizes are:
qDebug()<<"sceneRect = "<< scene->sceneRect();
qDebug()<<"viewRect = " << ui->graphicsView->rect();
sceneRect = QRectF(0,0 1000x750)
viewRect = QRect(0,0 733x415)
If necessary, I can upload screenshots of the original loaded image and the fitted-in-view result.
Am I doing this right? It seems all examples on the web use fitInView for auto-fitting. Should I perhaps use some other operations on the pixmap?
SOLUTION
// LOAD IMAGE
bool ImgViewer::loadImage(const QString &strImagePath)
{
    m_image = new QImage(strImagePath);
    if (m_image->isNull()) {
        return false;
    }

    clearView();
    m_pixmap = QPixmap::fromImage(*m_image);
    m_pixmapItem = m_scene->addPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem);

    // preserve fitView if active
    if (m_IsFitInView)
        fitView();

    return true;
}

// TOGGLED FUNCTIONS
void ImgViewer::fitView()
{
    if (m_image->isNull())
        return;

    this->resetTransform();

    // use a local copy of the pixmap (not the original), otherwise the image
    // gets blurred after scaling the same pixmap multiple times
    QPixmap px = m_pixmap;
    px = px.scaled(QSize(this->width(), this->height()),
                   Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
}

void ImgViewer::originalSize()
{
    if (m_image->isNull())
        return;

    this->resetTransform();
    m_pixmap = m_pixmap.scaled(QSize(m_image->width(), m_image->height()),
                               Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem); // ensure the item is centered in the view
}
On downscaling this produces good quality. Here are some stats after calling these two functions:
// "originalSize()" : IMAGE SIZE = (1152, 2048)
// "originalSize()" : PIXMAP SIZE = (1152, 2048)
// "originalSize()" : VIEW SIZE = (698, 499)
// "originalSize()" : SCENE SIZE = (1152, 2048)
// "fitView()" : IMAGE SIZE = (1152, 2048)
// "fitView()" : PIXMAP SIZE = (1152, 2048)
// "fitView()" : VIEW SIZE = (698, 499)
// "fitView()" : SCENE SIZE = (280, 499)
There is a problem now: after the call to fitView(), look at the scene size. It is much smaller.
And if fitView() is active and I then scale the image on wheelEvent (zoom in/out) with the view's scale function, scale(factor, factor), it produces a terrible result.
This doesn't happen with originalSize(), where the scene size is equal to the image size.
Think of the view as a window into the scene.
Moving the view by large amounts, either zooming in or out, will likely create images that don't look great. Rather than the image being rescaled as you would expect, the view is just moving away from the scene and doing its best to render it; the image has not been rescaled, just transformed in the scene.
Rather than using QGraphicsView::fitInView, keep the main image in memory and create a scaled version of the image with QPixmap::scaled each time FitInView is selected or the user zooms in/out. Then set this QPixmap on the QGraphicsPixmapItem with setPixmap.
You may also want to think about dropping the scroll bars and allowing the user to drag the image around the screen, which provides a better user interface in my opinion; though of course it depends on your requirements.
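To illustrate that approach with the member names from the asker's ImgViewer solution above (a sketch only; m_zoomFactor is an assumed member, initialised to 1.0, that is not in the original code), the wheel handler can rescale the original pixmap instead of calling QGraphicsView::scale:

// Sketch: rescale the original pixmap on zoom instead of scaling the view.
void ImgViewer::wheelEvent(QWheelEvent *event)
{
    if (m_image->isNull())
        return;

    // adjust the zoom factor, keeping it within sensible bounds
    m_zoomFactor *= (event->angleDelta().y() > 0) ? 1.25 : 0.8;
    m_zoomFactor = qBound(0.05, m_zoomFactor, 10.0);

    // always scale from the untouched original pixmap for best quality
    QPixmap px = m_pixmap.scaled(m_pixmap.size() * m_zoomFactor,
                                 Qt::KeepAspectRatio,
                                 Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
    this->centerOn(m_pixmapItem);
}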

How to improve display quality in pdf.js

I'm using the open source library for PDF documents from Mozilla (pdf.js).
When I try to open PDF documents of poor quality, the viewer displays them with VERY BAD quality.
But if I open the same document in a reader, or in the browser (drag/drop into a new window), it displays well.
Is it possible to change this?
Here is the library on GitHub: mozilla pdf.js
You just have to change the scaling of your PDF, i.e. when rendering a page:
pdfDoc.getPage(num).then(function(page) {
    var viewport = page.getViewport(scale);
    canvas.height = viewport.height;
    canvas.width = viewport.width;
    ...
It is the scale value you have to change. The resulting rendered image will then be fitted into the canvas according to its dimensions, e.g. set in CSS. What this means is that you produce a bigger image, fit it into the same container you had before, and so you effectively improve the resolution.
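A minimal sketch of that idea, using the same rendering code as above (the oversampling factor renderScale and the CSS sizing are assumptions for illustration, not part of pdf.js itself):

pdfDoc.getPage(num).then(function (page) {
    // Render at a higher scale than the on-screen size...
    var renderScale = 2; // assumed oversampling factor
    var viewport = page.getViewport(renderScale);

    canvas.width = viewport.width;
    canvas.height = viewport.height;

    // ...but let CSS shrink the canvas back to the container width,
    // so the extra pixels show up as sharper output.
    canvas.style.width = '100%';

    page.render({
        canvasContext: canvas.getContext('2d'),
        viewport: viewport
    });
});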
There is a renderPage function in web/viewer.js, and the print resolution is hard-coded there as 150 DPI.
function renderPage(activeServiceOnEntry, pdfDocument, pageNumber, size) {
    var scratchCanvas = activeService.scratchCanvas;
    var PRINT_RESOLUTION = 150;
    var PRINT_UNITS = PRINT_RESOLUTION / 72.0;
To change the print resolution to 300 DPI, simply change that line to:
var PRINT_RESOLUTION = 300;
See How to increase print quality of PDF file with PDF.js viewer for more details.
Maybe it's an issue related to the pixel ratio; it used to happen to me when the device pixel ratio is bigger than 1 (for example iPhone, iPad, etc.). You can read this question for a better explanation.
Just try that file in the PDF.js viewer. If it works as expected, check how PDF.js handles a pixel ratio > 1 here. What the library basically does is:
canvas.width = viewport.width * window.devicePixelRatio;
canvas.style.width = viewport.width + 'px'; // Note: the px unit is required here
But you should check how PDF.js does it for better performance.
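A sketch of how this can look in practice (the transform array passed to page.render is the approach used in the official pdf.js examples for HiDPI output; ctx and viewport are assumed to come from the usual rendering code):

var outputScale = window.devicePixelRatio || 1;

// Backing store in physical pixels, CSS size in logical pixels
canvas.width = Math.floor(viewport.width * outputScale);
canvas.height = Math.floor(viewport.height * outputScale);
canvas.style.width = Math.floor(viewport.width) + 'px';
canvas.style.height = Math.floor(viewport.height) + 'px';

page.render({
    canvasContext: ctx,
    transform: outputScale !== 1 ? [outputScale, 0, 0, outputScale, 0, 0] : null,
    viewport: viewport
});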
I ran into the same issue and I used the intent option of the renderContext to fix it.
const renderContext = {
    intent: 'print',
    // ....
}
var renderTask = page.render(renderContext);
As per the docs, renderContext accepts intent, which supports three values: display, print, or any. The default is display. When I used print instead, the render quality was extremely good, on par with any desktop app.

TextView with long text invisible with LAYER_TYPE_HARDWARE or LAYER_TYPE_SOFTWARE

I'm having a problem rendering a long TextView in a hardware-accelerated activity (android:hardwareAccelerated="true"). The TextView has no background color (i.e. it is transparent). When the text is longer than a certain length, the TextView renders with a solid black background instead of a transparent background.
The text in the TextView can be edited by the user, and is being forced to not wrap except at actual newlines. I'm doing this by calculating the width of the text like so:
int textWidth = 0;
String[] lines = string.split("\\n");
for (String line : lines) {
    int lineWidth = (int) tv.getPaint().measureText(line);
    if (lineWidth > textWidth) {
        textWidth = lineWidth;
    }
}
int width = m.getPaddingLeft() + tv.getPaddingLeft() + textWidth
        + tv.getPaddingRight() + m.getPaddingRight();
Then I override the onMeasure method of the ViewGroup to force the width to be at least as wide as the text:
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    int newWidth = Math.max(getMeasuredWidth(), width);
    setMeasuredDimension(newWidth, getMeasuredHeight());
}
All of this is working as expected, but it allows the text to get really big - too big apparently.
Attempted Solutions:
I guessed that the problem was with OpenGL being unable to render something that long, so I queried the GL_MAX_TEXTURE_SIZE OpenGL parameter and compared it to the width. Sure enough, the problem occurs when width > GL_MAX_TEXTURE_SIZE.
To solve this problem, I wrote some code to disable hardware acceleration on the view when the text is too long:
int[] maxGlTexSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxGlTexSize, 0);
if (width > maxGlTexSize[0]) {
    Log.e("Debug", "Too big for GL");
    tv.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
} else {
    Log.i("Debug", "Small enough for GL");
    tv.setLayerType(View.LAYER_TYPE_NONE, null);
}
However, this code doesn't work for me. When the condition is met (the text is too long), the TextView becomes invisible. This also happens if I try to use LAYER_TYPE_HARDWARE. (I tried that because the Hardware Acceleration guide says to set the layer type to HARDWARE for large views with alpha.)
Also, I did try permanently setting the view layer type. The results were slightly different for the two types:
LAYER_TYPE_SOFTWARE: When the activity is created with text smaller than the limit, it renders fine. When text is added to surpass the limit, the view disappears. When the text is shortened to be within the limit again, it reappears.
LAYER_TYPE_HARDWARE: Identical to LAYER_TYPE_SOFTWARE except that the text does not reappear when shortened after being too long. The activity must be recreated in order for the text to reappear.
TL;DR
I'm having a view rendering problem caused by OpenGL limitations, but view.setLayerType(View.LAYER_TYPE_SOFTWARE, null); is making the view disappear rather than fixing the problem.
After thinking about this problem for a bit, I realized that it makes sense that changing the layer type of the view doesn't solve the problem. If the activity is hardware accelerated, the view still has to be stored in a GPU texture to be rendered to the screen, regardless of whether or not the view is hardware accelerated.
To solve the problem, I simply lowered the resolution (size) of the text until the view's width was less than GL_MAX_TEXTURE_SIZE. This works well because the text doesn't need to be high resolution when the user is displaying a lot of it; they will scale it down anyway to fit all of the text on the screen.
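For illustration, a rough sketch of that approach (a sketch only; the method names, the shrinkFactor step, and the reuse of the width measurement from the question are assumptions, not the asker's exact code):

// Sketch: shrink the text size until the widest line fits into a GL texture.
private void shrinkTextToFitGlTexture(TextView tv, String text, int maxGlTexSize) {
    final float shrinkFactor = 0.9f; // assumed step per iteration
    float textSize = tv.getTextSize(); // current size in pixels

    while (widestLinePx(tv, text) > maxGlTexSize && textSize > 1f) {
        textSize *= shrinkFactor;
        tv.setTextSize(TypedValue.COMPLEX_UNIT_PX, textSize);
    }
}

// Same measurement as the loop near the top of the question.
private int widestLinePx(TextView tv, String text) {
    int widest = 0;
    for (String line : text.split("\\n")) {
        widest = Math.max(widest, (int) tv.getPaint().measureText(line));
    }
    return widest;
}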

Photoshop Action to fill image to make a certain ratio

I am looking to make a Photoshop action (maybe this isn't possible; any other application recommendations would be helpful as well). I want to take a collection of photos and make them a certain aspect ratio, e.g. 4:3.
So I have an image that is 150px wide by 200px high. What I would like to happen is that the image's canvas is made 267px wide (200 × 4/3 ≈ 267), with the new area filled with a certain color.
So there are two possibilities I can think of:
1) Photoshop actions could do this, but I would have to pull the current height, multiply it by 1.333333, and then put that value in the width box of the canvas resize. Is it possible to have calculated values in Photoshop actions?
2) Some other application has this feature built in.
Any help is greatly appreciated.
Wow, I see now (after writing the answer) that this was asked a long time ago... oh well. This script does the trick.
This Photoshop script will resize any image's canvas so that it has a 4:5 aspect ratio. You can change the aspect ratio applied by changing arWidth and arHeight. The fill color will be set to the current background color. You could create an action to open a file, apply this script, then close the file to do a batch process.
Shut down Photoshop.
Copy this JavaScript into a new file named "Resize Canvas.jsx" in Photoshop's Presets\Scripts folder.
Start Photoshop, and the script should appear in the File - Scripts menu.
#target photoshop

main();

function main()
{
    if (app.documents.length < 1)
    {
        alert("No document open to resize.");
        return;
    }

    // These can be changed to create images with different aspect ratios.
    var arHeight = 4;
    var arWidth = 5;

    // Apply the resize to Photoshop's active (selected) document.
    var doc = app.activeDocument;

    // Get the image size in pixels.
    var pixelWidth = new UnitValue (doc.width, doc.width.type);
    var pixelHeight = new UnitValue (doc.height, doc.height.type);
    pixelWidth.convert ('px');
    pixelHeight.convert ('px');

    // Determine the target aspect ratio and the current aspect ratio of the image.
    var targetAr = arWidth / arHeight;
    var sourceAr = pixelWidth / pixelHeight;

    // Start by setting the current dimensions.
    var resizedWidth = pixelWidth;
    var resizedHeight = pixelHeight;

    // The source image aspect ratio determines which dimension, if any, needs to be changed.
    if (sourceAr < targetAr)
        resizedWidth = (arWidth * pixelHeight) / arHeight;
    else
        resizedHeight = (arHeight * pixelWidth) / arWidth;

    // Apply the change to the image.
    doc.resizeCanvas (resizedWidth, resizedHeight, AnchorPosition.MIDDLECENTER);
}
Mind that the accepted answer from #user268911 may not work for you if the source image has a pixels/inch value different from 72, because the UnitValue.convert function works correctly only with 72 px/inch. To be sure the conversion is correct for every pixels/inch value, set the baseUnit property as follows:
...
var pixelWidth = new UnitValue (doc.width, doc.width.type);
pixelWidth.baseUnit = UnitValue (doc.width.baseUnit, "in");
var pixelHeight = new UnitValue (doc.height, doc.height.type);
pixelHeight.baseUnit = UnitValue (doc.height.baseUnit, "in");
...
For more details about the conversion see "Converting pixel and percentage values" section of the Adobe JavaScript Tools Guide.
What languages do you know? ImageMagick has command line tools that can do this, but you'd need to know a scripting language to get the values and calculate the new ones.
For .NET, my company's product, DotImage Photo, is free and can do this (you'd need to know C# or VB.NET).
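For reference, a rough ImageMagick sketch of the same padding operation from the command line (a sketch only; the file names and the white fill color are assumptions, and it pads the width up to a 4:3 ratio using the current height):

# Read the current dimensions (assumed input file in.png)
w=$(identify -format "%w" in.png)
h=$(identify -format "%h" in.png)

# Target width for a 4:3 ratio, rounded (e.g. 200px high -> 267px wide)
neww=$(( (h * 4 + 2) / 3 ))
if [ "$neww" -lt "$w" ]; then neww=$w; fi

# Extend the canvas, keeping the image centered on a white background
convert in.png -background white -gravity center -extent "${neww}x${h}" out.png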

Resources