I am trying to create an app that will run on both PC (Windows) and Android; however, I am having issues getting the correct screen size on both.
On Windows, Screen.getBounds() seems to always return the correct full screen size (i.e. the whole screen, not reduced by the taskbar area). On Android, Screen.getBounds() returns a screen size that is much larger than the actual screen size. The only way I can get my app to work correctly on Android is to use Screen.getVisualBounds(). However, on Windows, Screen.getVisualBounds() always returns a height slightly smaller than the total screen size, since it removes the space occupied by the taskbar.
Does anyone know why Screen.getBounds() returns a much larger value on Android than the actual visible screen size?
Thanks.
Screen.getBounds() returns the physical pixels, while Screen.getVisualBounds() returns the logical ones.
While on desktop the difference between those bounds is just the space taken by the taskbar, on mobile devices the difference comes from the pixel density (scale), which can be greater than 1.
On a Nexus 6, for instance, Screen.getBounds() reports roughly 1440 x 2560, while Screen.getVisualBounds() reports roughly 411 x 731, because the pixel density of this device is 3.5.
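If you want to verify this on your own device, here is a small sketch (not part of the original answer; the method name is just illustrative) that logs both bounds and the scale derived from them:

import javafx.geometry.Rectangle2D;
import javafx.stage.Screen;

// Call this from start(): logs both bounds and the scale between them
private void logScreenBounds() {
    Rectangle2D bounds = Screen.getPrimary().getBounds();        // physical pixels
    Rectangle2D visual = Screen.getPrimary().getVisualBounds();  // logical pixels
    System.out.println("getBounds():       " + bounds);
    System.out.println("getVisualBounds(): " + visual);
    // on a Nexus 6 this ratio works out to about 3.5
    System.out.println("scale ~ " + bounds.getWidth() / visual.getWidth());
}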
Back to your initial problem, you will need to use Screen.getVisualBounds() on Android.
On desktop, on the other hand, you are free to choose whichever of the two suits you:
@Override
public void start(Stage stage) {
    // Full (physical) bounds on desktop, visual (logical) bounds on mobile
    Rectangle2D bounds = JavaFXPlatform.isDesktop() ?
            Screen.getPrimary().getBounds() :
            Screen.getPrimary().getVisualBounds();

    Scene scene = new Scene(new StackPane(), bounds.getWidth(), bounds.getHeight());
    stage.setScene(scene);
    stage.show();
}
where JavaFXPlatform comes from Gluon Charm Down, an OSS library that you can get here.
I am working on a small 2D game engine for the Android NDK using OpenGL.
I am having difficulty figuring out how to change levels, e.g. from the menu to the game screen.
The texture ids stop working when I load new textures for the game screen: glGenTextures keeps returning duplicate ids.
// class that binds the NDK code to Java (a GLSurfaceView.Renderer)
Renderer.java

public void onSurfaceChanged(GL10 gl, int width, int height) {
    nativeSurfaceChanged();
}

public void onDrawFrame(GL10 gl) {
    nativeDrawFrame();
}
// C++ code
Game.cpp

Screen *screen;

void SetScreen(Screen *scrn) {
    screen = scrn;
    // load textures and create OpenGL objects (meshes and textures)
    screen->Initialize();
}

void Update() {
    screen->Update();
    screen->Render();
}
NDKActivity.cpp

Game *game;

void nativeSurfaceChanged() {
    // initializes things like the audio engine and rendering engine
    game->Initialize();
    // set the current screen to the main menu
    game->SetScreen(new MainMenu());
}

void nativeDrawFrame() {
    game->Update();
}
MainMenu.cpp

void Update() {
    // if some button is clicked
    game->SetScreen(new GameScreen());
}
Now when the menu is initialized, everything works fine, but on loading the GameScreen the texture ids get all mixed up.
I basically ported this from a Windows app where I was using GLUT to create the OpenGL context, and there it worked fine.
Please let me know if more info is needed.
glGenTextures keeps returning duplicate ids
That should not happen, unless you're deleting the texture with that ID somewhere (else) or you end up in a different GL context.
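One quick way to test the "different GL context" theory from the native side is to log the EGL context that is current at both call sites and compare the two. A small diagnostic sketch (the log tag and function name are just illustrative):

#include <EGL/egl.h>
#include <android/log.h>

// Log the EGL context that is current on the calling thread.
// eglGetCurrentContext() returns EGL_NO_CONTEXT on a thread with no GL context,
// which is exactly what happens if you call GL from a touch-event thread.
static void LogCurrentContext(const char *where)
{
    EGLContext ctx = eglGetCurrentContext();
    __android_log_print(ANDROID_LOG_DEBUG, "GameEngine",
                        "%s: EGL context = %p", where, ctx);
}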
I am having difficulty figuring out how to change levels, e.g. from the menu to the game screen.
It's as easy as drawing something different. Have two functions, draw_menu and draw_level, and maybe a draw_loadingscreen; while you're in the loading screen or the menu, you can replace all the resources the level uses.
Or you could use a modern streaming approach, where you don't distinguish between menu, loading screen and level at all: for each frame you collect the resources required to draw it, load whatever is not yet in the GL context, and then draw.
Deleting GL resources can be mostly offloaded to a background garbage-collection routine. That is, every frame you increment a counter for every resource of your game; during frame setup, when you are actually about to draw a resource, you reset its counter to zero. Once a counter hits a certain threshold you delete that resource. If you hit an out-of-memory condition while loading resources into GL, you work through the resources from the highest counter value downward, deleting them from the GL context.
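For what it's worth, here is a rough sketch of that counter-based cleanup, assuming a simple Resource record that owns one texture id (the struct, function name and threshold are placeholders, not part of the engine above):

#include <GLES2/gl2.h>
#include <vector>

struct Resource {
    GLuint textureId = 0;
    int framesUnused = 0;   // incremented every frame, reset to 0 when drawn
};

// Call once per frame after rendering: ages every resource and deletes
// the ones that have not been used for a while.
void CollectUnusedResources(std::vector<Resource> &resources, int maxUnusedFrames = 300)
{
    for (auto it = resources.begin(); it != resources.end(); ) {
        if (++it->framesUnused > maxUnusedFrames) {
            glDeleteTextures(1, &it->textureId);
            it = resources.erase(it);
        } else {
            ++it;
        }
    }
}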
So as it turns out, I was triggering the screen change from the OnTouch() method, which apparently runs on a different thread than the OpenGL one, and thus everything was falling apart. I had to add a callback that changes the screen in onDrawFrame, and that fixed the issue.
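For anyone hitting the same problem, a minimal sketch of that kind of deferral, based on the classes shown in the question (the pendingScreen member and RequestScreen name are additions for illustration, not the original engine code):

#include <atomic>

struct Screen {
    virtual ~Screen() = default;
    virtual void Initialize() = 0;   // creates GL objects, so it must run on the GL thread
    virtual void Update() = 0;
    virtual void Render() = 0;
};

class Game {
public:
    // Called from the touch/UI thread: only remember the request.
    void RequestScreen(Screen *next) { pendingScreen.store(next); }

    // Called from onDrawFrame, i.e. on the GL thread.
    void Update() {
        if (Screen *next = pendingScreen.exchange(nullptr)) {
            SetScreen(next);   // glGenTextures etc. now run with the right context current
        }
        if (screen) {
            screen->Update();
            screen->Render();
        }
    }

private:
    void SetScreen(Screen *scrn) {
        screen = scrn;
        screen->Initialize();
    }

    Screen *screen = nullptr;
    std::atomic<Screen *> pendingScreen{nullptr};
};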
thanks for helping :)
I would like to change the screen size of the ARC Welder Chrome extension to a 7-inch screen (displayed on my PC) to test an app on different screen sizes.
Can this be done using for example the meta-data input?
This is similar to a question that I asked recently, which I think shares the same answer. It seems that there are few choices when it comes to form factor, and based on the answer to my question, I think you can only use the three form-factor presets for now.
(from @Elijah Taylor)
The size of the window is not configurable per activity*, but the orientation is. The two options in ARC Welder that control the window are:
Orientation: This is either landscape or portrait, which will be the default orientation for your app. However, if you set a screenOrientation on your Android activity, this can override the orientation per activity, with the window rotating to compensate. There is a performance cost to rotating this way because the plugin will be rotated via CSS.
Form Factor: This is one of phone, tablet, or maximized. This controls the overall size of your app globally.
*But for Chrome 42 and up you can use the metadata {"resize": "reconfigure"} to allow arbitrary user resizing. Your app must be able to re-layout with a variety of aspect ratios and resolutions in this mode.
My question is at: Android ARC app for chrome, set size of windows for different Activities/Layouts
You can use this metadata:
{
    "resize": "reconfigure"
}
Just thought I'd mention that there is also "formFactor": "fullscreen" if you want to test full screen; it also works together with "resize": "reconfigure".
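Putting the two together, the metadata field would contain something like this (both keys are taken from the answers above):

{
    "formFactor": "fullscreen",
    "resize": "reconfigure"
}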
I'm writing an application for the testing team. It lets you take a screenshot of any part of the screen (and then it uploads it to the testing team's server with comments).
Taking screenshot involves selecting the region on the screen to take screenshot of. For this I'm creating a semi-transparent window and overlaying it over the entire screen. I'm currently using GetDesktopWindow() and GetWindowRect() to get the dimensions of the screen but this doesn't work in multi-screen environments.
How do I overlay a window over all possible screens?
The screen configurations can be pretty exotic, such as:
[LCD]
[LCD][LCD][LCD]
(4 lcd screens - one at the top, 3 at the bottom)
Or
[LCD] [LCD]
[LCD][LCD][LCD]
[LCD] [LCD]
(7 lcd screens - 3 on the right, 3 on the left, 1 in the middle).
Etc.
Does anyone know how I could overlay one window over all the screens? I wonder what the dimensions would look like in the first exotic example, where there is no screen at the top-row left and right.
Perhaps I should be creating one overlay window per LCD screen?
Any ideas?
You can use the EnumDisplayMonitors function for this. Here's a little class that automatically builds a vector of all monitors in the system, as well as a union of them all.
#include <windows.h>
#include <vector>

struct MonitorRects
{
    std::vector<RECT> rcMonitors;   // one rectangle per monitor
    RECT rcCombined;                // union of all of them

    static BOOL CALLBACK MonitorEnum(HMONITOR hMon, HDC hdc, LPRECT lprcMonitor, LPARAM pData)
    {
        MonitorRects* pThis = reinterpret_cast<MonitorRects*>(pData);
        pThis->rcMonitors.push_back(*lprcMonitor);
        UnionRect(&pThis->rcCombined, &pThis->rcCombined, lprcMonitor);
        return TRUE;
    }

    MonitorRects()
    {
        SetRectEmpty(&rcCombined);
        EnumDisplayMonitors(0, 0, MonitorEnum, (LPARAM)this);
    }
};
If you just create one big window using the rcCombined rectangle from that, it will overlay all the screens and the "missing" bits will just be clipped out automatically by the system.
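For example, a sketch of how the combined rectangle might be used (the window class name, style flags and alpha value are illustrative choices; class registration and the window procedure are omitted):

#include <windows.h>

// Create one semi-transparent overlay window covering every monitor.
HWND CreateOverlayWindow(HINSTANCE hInstance)
{
    MonitorRects monitors;                 // the helper struct from above
    const RECT &rc = monitors.rcCombined;  // note: left/top may be negative

    HWND hOverlay = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"ScreenshotOverlayClass", L"",
        WS_POPUP,
        rc.left, rc.top,
        rc.right - rc.left, rc.bottom - rc.top,
        nullptr, nullptr, hInstance, nullptr);

    // Roughly 50% opacity so the desktop stays visible underneath.
    SetLayeredWindowAttributes(hOverlay, 0, 128, LWA_ALPHA);
    ShowWindow(hOverlay, SW_SHOW);
    return hOverlay;
}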
Refer to MSDN for details about working with multiple monitors:
Multiple Display Monitors
Virtual Screen
Multiple Monitor System Metrics
You can use GetSystemMetrics() with the SM_XVIRTUALSCREEN, SM_YVIRTUALSCREEN,
SM_CXVIRTUALSCREEN, and SM_CYVIRTUALSCREEN metrics to retrieve the rectangle of the entire virtual screen that contains all of the physical screens.
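In other words, something along these lines (a small sketch; error handling omitted):

#include <windows.h>

// The bounding rectangle of the virtual screen that spans all monitors.
// left/top can be negative when a monitor sits above or to the left of
// the primary one.
RECT GetVirtualScreenRect()
{
    RECT rc;
    rc.left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
    rc.top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
    rc.right  = rc.left + GetSystemMetrics(SM_CXVIRTUALSCREEN);
    rc.bottom = rc.top  + GetSystemMetrics(SM_CYVIRTUALSCREEN);
    return rc;
}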
No, that is not a bug. Negative coordinates are part of the design: if a user moves a monitor above or to the left of the primary monitor's (0,0) (top-left) point, that is acceptable, and that monitor will get negative coordinates. The primary monitor's (0,0) point is not the top-left corner of the virtual screen.
My main NSWindow contains UI restricted to a certain size range, otherwise it can get corrupted. I restrict the window to a size range using
[myWindow setContentMaxSize:maxSize];
[myWindow setContentMinSize:minSize];
This works fine when the user drags the edges or the size box.
When the user presses the "fullscreen" button, Lion starts an animation that will:
1. shrink the window below its current size,
2. then, in several steps, increase its size until it reaches the full-screen size.
If the window started at its minimal size, this animation shrinks it BELOW the defined minimum size and corrupts my UI beyond repair (the user needs to relaunch the app). My views receive setFrameSize: with an unsupported size.
My questions:
1. Can this be considered a Cocoa bug?
2. Am I doing something wrong in my view hierarchy?
3. Can I somehow prevent the corruption without replacing the OS's standard full-screen animation?
4. Why isn't the standard animation based on a "snapshot" of the window contents, instead of live-resizing the whole view hierarchy throughout the animation? This is surely not efficient.
5. Is there a simple way to apply another standard transition that will be non-destructive for me?
6. Can someone "spare" a few lines of code that will do a simple linear resizing animation that will NOT go below the minimum?
Thanks!
I've also investigated the fullscreen animation behaviour, and here is how it works:
It is indeed based on taking snapshots of the window's content, but with some improvements: it takes several snapshots at certain control points, 512, 1024, 2048 pixels wide, and so on.
In my case, entering full screen at 2560x1440, my 400-pixel-wide window had a 512-pixel-wide snapshot taken, then a 1024-pixel one, and then a 2560-pixel-wide one. I don't know whether this behaviour is the default for all cases, but this is the result of my own investigation.
On the issue with setting min/max window size:
The minimum window size set up in Interface Builder works for me, but the max constraint does not; I'm still looking into that. In the meantime, this documented delegate method should work for you. Place this code in your window delegate:
static const NSSize kWindowMinSize = {100, 100};
static const NSSize kWindowMaxSize = {100500, 100500};
...
- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize
{
    frameSize.width  = MAX(kWindowMinSize.width,  MIN(kWindowMaxSize.width,  frameSize.width));
    frameSize.height = MAX(kWindowMinSize.height, MIN(kWindowMaxSize.height, frameSize.height));
    return frameSize;
}
Hope this will help you!
I am trying to make my first app using XNA, and I am having some issues with orientation and coordinates.
By default, my phone emulator is in portrait mode, but (0,0) is in the top-right corner, and X and Y seem to be switched from how I would expect them to be (x goes up, y goes across).
In my code, I try changing the orientation with something similar to:
SupportedOrientations = SupportedPageOrientation.Portrait;
SupportedOrientations.FullScreen = true;
And when I do this, it fixes the coordinate problems I am having, but then the screen becomes just a little square.
Any ideas how to fix this? Is this just how it is supposed to be?
Also, does orientation change automatically, or do I need to explicitly add
private void PhoneApplicationPage_OrientationChanging(object sender, OrientationChangedEventArgs e)
Thanks
In addition to restricting the valid orientations, you should set PreferredBackBufferWidth and PreferredBackBufferHeight appropriately (480 and 800, respectively, for current WP7). Both are found on the graphics member of your main Game class. You don't need to handle the orientation change manually.
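For example, in the constructor that the WP7 game template generates, that would look roughly like this (the IsFullScreen line is an extra assumption to hide the system tray, not something required by the answer above):

using Microsoft.Xna.Framework;

public class Game1 : Game
{
    private readonly GraphicsDeviceManager graphics;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);

        // Portrait only, matching the 480x800 WP7 screen.
        graphics.SupportedOrientations = DisplayOrientation.Portrait;
        graphics.PreferredBackBufferWidth = 480;
        graphics.PreferredBackBufferHeight = 800;
        graphics.IsFullScreen = true;   // assumption: also hides the system tray
    }
}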