I've found a problem running Unity (versions 2019.4.34f1 and 2021.2.8f1) on my MacBook with an M1 chip. Whenever I enter Play mode with the Scene and Game tabs open at the same time, I get only around 30 fps, which is very strange on an M1 chip. But when I close the Scene tab and have ONLY the Game tab open, the fps jumps to 400+.
I have:
Mono and .NET Arm64 installed
the default Unity configuration
Is there a way to resolve this issue?
I suspect this has to do with a default power-saving setting on laptops. Try changing the Interaction Mode under Edit/Preferences/General.
Interaction Mode: Specifies how long the Editor can idle before it
updates. After one frame finishes, the Editor waits up to the
specified amount of time before the next frame begins.
This allows you to throttle Editor performance, and reduce consumption
of CPU resources and power.
For example, if you want to test the Editor’s performance, set this
property to No Throttling. If you want to conserve power (for example,
to prolong battery life on a laptop computer), set this property to
one of the throttled modes.
In throttled modes, the Editor stops idling if you interact with it
(for example, by panning in the Scene view).
Note: In Play mode, Unity ignores any throttling that you apply using
this preference. The editor updates as fast as possible unless you
enable the VSync option in the Game view, or set
Application.targetFrameRate to a value other than –1.
Unity Docs
I am trying to develop a background utility for Windows with the following features:
1. store the positions of the open application windows for the current display resolution Rx
2. scan for any change in display resolution Rx -> Ry (e.g. when docking/undocking my laptop to/from its docking station)
3. wait for Windows to finish its automatic repositioning of the windows (the messy desktop reconfiguration; see below)
4. restore the previously stored window positions for display resolution Ry
I have successfully implemented features 1, 2 and 4. For feature 2, I intercept the WM_DISPLAYCHANGE event, which tells me that a change in display resolution has occurred.
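For reference, feature 2 boils down to something like the following: a hidden top-level window whose window procedure watches for WM_DISPLAYCHANGE (the class name and handler are illustrative only):

    #include <windows.h>

    // Hypothetical handler: remember that a resolution change has just happened.
    void OnResolutionChanged(int width, int height)
    {
        // ... mark the start of the "messy desktop reconfiguration" phase ...
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_DISPLAYCHANGE)
        {
            // lParam packs the new desktop width/height; wParam is the bit depth.
            OnResolutionChanged(LOWORD(lParam), HIWORD(lParam));
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int main()
    {
        HINSTANCE hInst = GetModuleHandle(nullptr);

        WNDCLASS wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = TEXT("LayoutWatcher");   // illustrative class name
        RegisterClass(&wc);

        // Hidden top-level window: WM_DISPLAYCHANGE is broadcast to top-level
        // windows, so the window is created but never shown.
        CreateWindow(TEXT("LayoutWatcher"), TEXT(""), 0, 0, 0, 0, 0,
                     nullptr, nullptr, hInst, nullptr);

        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }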
Immediately after the resolution change occurs, Windows automatically repositions many (if not all) windows to meet the new display constraints. This phase (which I call the messy desktop reconfiguration) lasts several seconds and causes a real mess in the window layout (which is why I want to develop this utility).
Now I have the following problem:
I can only apply feature 4 when Windows has finished the messy desktop reconfiguration phase (3). However, I have not found any acceptable way to detect when this phase ends.
I did try to observe the automatic repositioning by intercepting the EVENT_OBJECT_LOCATIONCHANGE event and waiting for the last one to be issued. It almost works: I am reliably notified when repositioning happens. However, I'm not satisfied, because the interval between these events can be as long as 5 seconds (or maybe more), which makes it hard to identify the last one with a classic timeout. I currently have the timeout set to 10 seconds, which is not satisfactory: I would like to detect the end of the messy desktop reconfiguration earlier, and, more importantly, there is no guarantee that another of these events will not occur after the timeout.
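To make that concrete, here is a minimal sketch of the timeout approach as I have it, using SetWinEventHook plus a timer that is restarted on every notification (the window class, timer id, and quiet period are arbitrary):

    #include <windows.h>

    static const UINT_PTR kQuietTimerId  = 1;      // arbitrary timer id
    static const UINT     kQuietPeriodMs = 10000;  // my current (unsatisfactory) timeout
    static HWND           g_hwnd;                  // hidden window owning the timer

    // Every EVENT_OBJECT_LOCATIONCHANGE restarts the quiet-period timer; reusing
    // the same hwnd/id in SetTimer replaces the previous timer, so the countdown resets.
    void CALLBACK WinEventProc(HWINEVENTHOOK, DWORD event, HWND, LONG, LONG, DWORD, DWORD)
    {
        if (event == EVENT_OBJECT_LOCATIONCHANGE)
            SetTimer(g_hwnd, kQuietTimerId, kQuietPeriodMs, nullptr);
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_TIMER && wParam == kQuietTimerId)
        {
            KillTimer(hwnd, kQuietTimerId);
            // kQuietPeriodMs elapsed with no location change: assume the "messy
            // desktop reconfiguration" is over and restore the saved layout here.
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int main()
    {
        HINSTANCE hInst = GetModuleHandle(nullptr);

        WNDCLASS wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = TEXT("QuietPeriodWatcher");   // illustrative name
        RegisterClass(&wc);
        g_hwnd = CreateWindow(TEXT("QuietPeriodWatcher"), TEXT(""), 0, 0, 0, 0, 0,
                              HWND_MESSAGE, nullptr, hInst, nullptr);

        // Out-of-context hook: notifications arrive via this thread's message loop.
        HWINEVENTHOOK hook = SetWinEventHook(
            EVENT_OBJECT_LOCATIONCHANGE, EVENT_OBJECT_LOCATIONCHANGE,
            nullptr, WinEventProc, 0, 0,
            WINEVENT_OUTOFCONTEXT | WINEVENT_SKIPOWNPROCESS);

        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        UnhookWinEvent(hook);
        return 0;
    }

The whole difficulty is choosing kQuietPeriodMs: too short and a late event slips through after the restore, too long and the layout stays messy for no reason.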
Does anyone have a solution to reliably detect the end of the automatic window repositioning that follows a display resolution change?
I want to occasionally VNC from an old laptop to my main PC. The old laptop's screen resolution (1024x768) is much lower than the PC's resolution (which at the moment is 1280x1024).
A few months ago, I set up x11vnc on my PC so that it would automatically lower the screen resolution when I connected and restore the optimum resolution when I was done. This worked incredibly well (seriously, it was awesome)... until I noticed that the backlight in the LCD attached to the PC seemed to be having issues.
Turns out, all the (very very frequent) screen resolution changes were killing the backlight lifespan, because (like most LCDs) the backlight on this LCD switches off and back on whenever the resolution changes.
Once I realized this, I immediately stopped VNCing, which was extremely inconvenient and a major workflow disruption. However, I couldn't risk killing my LCD.
So, I am looking for a way to lower the resolution in X11 in a way that does not cause the backlight to flicker.
In other words, I want to change the resolution X11 reports to programs without adjusting the actual screen resolution.
I've already been lowering and restoring the screen resolution for some time, so I am used to windows getting thrown around a bit.
And I fully expect that when the "effective" resolution is lowered, all the windows will probably bunch toward the top-right with a giant empty black area covering a lot of the screen. This is also fine.
I'm aware Openbox has a "margins" system that will affect the state of maximized windows, but I'm not using Openbox. (I'm currently using i3 but this is likely to change in future.)
Ideally I want the solution to be window-manager-independent, something that sits between X11 and the WM. A program that watches floating windows and automatically moves them within a constrained area would be trivial to write (a sketch follows below); it's adjusting the state of maximized windows that I'm stumped on!
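Just to make the trivial half concrete, I imagine something along these lines in plain Xlib; the 1024x768 bound is simply my laptop's resolution, the clamping policy is arbitrary, and coexistence with a reparenting window manager is glossed over:

    #include <X11/Xlib.h>

    int main()
    {
        const int boundW = 1024, boundH = 768;   // the "effective" resolution I want

        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        Window root = DefaultRootWindow(dpy);

        // Get notified whenever any top-level window is moved, resized, mapped, etc.
        XSelectInput(dpy, root, SubstructureNotifyMask);

        for (;;)
        {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type != ConfigureNotify)
                continue;

            XConfigureEvent &c = ev.xconfigure;
            int nx = c.x, ny = c.y;

            // Clamp the window so it stays inside the constrained area.
            if (c.x + c.width  > boundW) nx = boundW - c.width;
            if (c.y + c.height > boundH) ny = boundH - c.height;
            if (nx < 0) nx = 0;
            if (ny < 0) ny = 0;

            if (nx != c.x || ny != c.y)
                XMoveWindow(dpy, c.window, nx, ny);
        }
    }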
I realize this is a particularly tricky question, and appreciate any ideas and suggestions!
I have a watchdog system service that monitors an OpenGL application for crashes, freezes, an overheating CPU/GPU, scheduled on/off periods, etc. Because this is a kiosk-style deployment, when the application is not running I call BlockInput() and power down the display with PostMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, 2). When the monitored application is running again, the display is unsuspended with PostMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, -1).
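For context, the blanking and unblanking in the watchdog boil down to the following (the function names are just for illustration):

    #include <windows.h>

    // Blank the display and swallow user input while the kiosk app is down.
    void EnterBlankedState()
    {
        BlockInput(TRUE);
        // SC_MONITORPOWER lParam: 2 = power off, 1 = low power, -1 = power on
        PostMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, 2);
    }

    // Wake the display again once the monitored application is back up.
    void LeaveBlankedState()
    {
        PostMessage(HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, -1);
        BlockInput(FALSE);
    }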
Well, it turns out that if the OpenGL application is launched while the display is in low-power mode, then when the display comes back on, vsync doesn't work in the application, regardless of graphics driver settings and calls to wglSwapIntervalEXT(). If the display is unsuspended before the application is launched (with some delay to give it time to actually power up), vsync works just fine. What's the fix here? I need the display blanked so random people in the mall don't see the desktop while the application is launching, but I also need vsync because we're seriously limited on electrical power and it makes a huge difference (we're running top-end NVIDIA cards that draw a lot of juice).
The vsync signal comes from the graphics card, not the monitor, so this makes no sense. Even if the graphics card doesn't send out a vsync signal during the low power display mode, why wouldn't OpenGL latch on to the signal once it reappears when SC_MONITORPOWER is sent to unsuspend? I really need a work-around, as I doubt Microsoft and/or NVIDIA will fix this any time soon, if they're even aware of it (and I'd be going on a fool's errand trying to contact either -- I'm just some poor dev in the middle of nowhere).
I am using BitBlt heavily in my project. I create a number of threads, and in each thread I capture the screen with BitBlt. It works as expected so far, except for the following problem.
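For reference, each capture thread does essentially the standard GDI sequence; here is a minimal sketch of the kind of BitBlt capture I mean (error handling and reuse of the DCs/bitmaps omitted):

    #include <windows.h>

    // One grab of the primary display into a memory bitmap via BitBlt.
    HBITMAP CaptureScreenOnce()
    {
        HDC screenDC = GetDC(nullptr);                   // DC for the whole screen
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC memDC   = CreateCompatibleDC(screenDC);
        HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
        HGDIOBJ old = SelectObject(memDC, bmp);

        // Copy the current screen contents into the memory bitmap.
        BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);

        SelectObject(memDC, old);
        DeleteDC(memDC);
        ReleaseDC(nullptr, screenDC);
        return bmp;                                      // caller DeleteObject()s this
    }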
The problem happens when the user clicks a running program on the taskbar, for example an already open Explorer window. When you click a running program on the taskbar, it either minimizes or appears on the screen. The issue I am talking about happens exactly during this transition: at that moment, as if interrupted, all threads stop capturing the screen for a fraction of a second and then continue capturing. The same thing happens when you drag the slider on the volume control window up or down. Could you please shed some light on why this is happening and how I can prevent it?
Thanks.
Jay
It could be a scheduling issue. When you activate an app, it gets a small, momentary boost in its priority (so that it can seem responsive in the UI). This boost might last about as long as the animation and momentarily pre-empt your screen capture threads.
It's also possible that the Desktop Window Manager is serializing these operations, and your BitBlts are simply stalled until the animation is over. Even if you've turned Aero off, I believe the desktop window manager may still be in compositing mode, which has the effect Hans Passant was describing in the comments.
If you're trying to make a video from the screen, I think it's going to be impossible to rely on GDI. I strongly suggest reading about the Desktop Window Manager. For example, this caveat directly applies to what you're trying to do:
Avoid reading from or writing to a display DC. Although supported by DWM, we do not recommend it because of decreased performance.
When you use GDI to read the screen, the DWM has to stop what it's doing, possibly render a fresh copy of the desktop to video memory, and copy data from video memory back to system memory. It's possible that the DWM treats these as lower-priority requests than an animation in progress, so by the time it responds to the BitBlt, the animation is over.
This question suggests that DirectShow with a screen capture filter might be the way to go.
I've noticed that the running times of my CUDA kernels are almost tripled the moment the screensaver kicks in. This happens even if it's the blank screensaver.
Oddly enough, this appears to have nothing to do with the power settings. When I disable the screen saver and let the screen power off, the performance stays the same. When I set "Turn off monitor" to "Never" and let the screen saver kick in, it happens.
Why does this happen?
Is there a way to counteract this phenomenon?
Is there a way to tell windows not to kick in the screen saver? (How do media players do it?)
I'm working on XP SP2 x64
First, it's interesting that CUDA is so impacted.
But here is the Win32 recipe for keeping the screensaver from kicking in:
A normal approach is to send yourself 'fake' key presses occasionally using the SendInput API, to reset the inactivity timer that triggers the screensaver.
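A minimal sketch of that trick, using VK_F15 purely as an example of a key that nothing is likely to react to:

    #include <windows.h>

    // Nudge the inactivity timer by synthesizing a key press nothing should notice.
    // VK_F15 is just an example; pick whatever your setup ignores.
    void ResetScreensaverTimer()
    {
        INPUT in[2] = {};

        in[0].type       = INPUT_KEYBOARD;
        in[0].ki.wVk     = VK_F15;

        in[1].type       = INPUT_KEYBOARD;
        in[1].ki.wVk     = VK_F15;
        in[1].ki.dwFlags = KEYEVENTF_KEYUP;

        SendInput(2, in, sizeof(INPUT));
    }

Call it from a periodic timer (say, once a minute) while you need the screen to stay alive.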
It is possible to stop applications from doing this, however, via the SPI_SETBLOCKSENDINPUTRESETS parameter of SystemParametersInfo.
Another approach is simply to turn the screensaver off programmatically, using SPI_SETSCREENSAVEACTIVE with SystemParametersInfo. However, this is a global setting for the whole user; what happens if two programs use it at overlapping times? Try to avoid this approach!
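If you do go that route, the calls themselves are simple; just save and restore the user's original setting (a minimal sketch, function names are mine):

    #include <windows.h>

    // Globally disable the screensaver, remembering the user's original setting.
    BOOL DisableScreensaver(BOOL *wasActive)
    {
        SystemParametersInfo(SPI_GETSCREENSAVEACTIVE, 0, wasActive, 0);
        return SystemParametersInfo(SPI_SETSCREENSAVEACTIVE, FALSE, nullptr, SPIF_SENDCHANGE);
    }

    // Put the setting back as soon as you no longer need it (and on exit paths, too).
    BOOL RestoreScreensaver(BOOL wasActive)
    {
        return SystemParametersInfo(SPI_SETSCREENSAVEACTIVE, wasActive, nullptr, SPIF_SENDCHANGE);
    }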