The backend selected for the PsychoPy window is unimportant; I just need to render complex user interfaces within a PsychoPy experiment workflow.
I am trying to develop a macOS application using Xcode and Cocoa. My intent is to create an overlay on the user's screen that is mostly transparent and does not register input. For example, an application like f.lux tints the colour of your entire screen like a global overlay, but you can still click on-screen items, as mouse clicks go right through (assuming that it's an overlay). How can I get started with achieving a similar overlay/widget?
I was wondering if there is any way to hide certain GUI components in the GUI editor in RAD Studio. I don't mean actually hiding the component at run time with the .hide() function; I mean hiding it at design time only, so that I can work on the other components and the design. For example, if I have a panel that pops up and is reused over and over with certain elements, but that covers my screen, I can't work on the other components. Is the only solution to create the components dynamically?
I am working on a Mac OS X control that is OpenGL-based. Currently I am using an NSOpenGLView and a CVDisplayLink to coordinate my rendering on a background thread. This works great, but I need to allow Cocoa controls to be displayed over this OpenGL-based control.
I realize you can do this by putting your Cocoa controls in borderless windows, but that doesn't seem like a very good workflow for my users.
Alternatively, I can make the view layer-backed, and I got that working; however, I don't like rendering my OpenGL content on the main thread, as it sometimes blocks the main thread when the frame rate dips.
Are there any samples that show how to achieve the best of both worlds?
The background thread for rendering is irrelevant here. You just need to enable layer-backing for the views, and then the subviews/controls will be composited correctly on top of your OpenGL content. You can also use CAOpenGLLayer for more explicit layering with CALayers.
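A minimal sketch of that suggestion, written with PyObjC purely for illustration (my assumption, not from the question; the calls map one-to-one onto Objective-C, and the constant names below are the modern AppKit spellings):

    # Sketch: a layer-backed NSOpenGLView with a Cocoa control composited on top.
    # Assumes PyObjC is installed.
    from Cocoa import (NSApplication, NSBackingStoreBuffered, NSButton, NSMakeRect,
                       NSOpenGLView, NSWindow, NSWindowStyleMaskClosable,
                       NSWindowStyleMaskTitled)

    app = NSApplication.sharedApplication()
    frame = NSMakeRect(0, 0, 480, 320)
    window = NSWindow.alloc().initWithContentRect_styleMask_backing_defer_(
        frame, NSWindowStyleMaskTitled | NSWindowStyleMaskClosable,
        NSBackingStoreBuffered, False)

    gl_view = NSOpenGLView.alloc().initWithFrame_pixelFormat_(
        frame, NSOpenGLView.defaultPixelFormat())
    gl_view.setWantsLayer_(True)  # the key call: [glView setWantsLayer:YES]

    button = NSButton.alloc().initWithFrame_(NSMakeRect(20, 20, 140, 32))
    button.setTitle_("On top of GL")
    gl_view.addSubview_(button)   # an ordinary Cocoa control over the GL content

    window.setContentView_(gl_view)
    window.makeKeyAndOrderFront_(None)
    app.run()

Everything other than setWantsLayer_(True) is window boilerplate; your CVDisplayLink rendering setup stays as it is.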
I'm having some undesired behavior with movable panels in wxPython. I'm using the wxPython Cocoa build 2.9.2.3 for Python 2.7 on Mac OS X 10.6.7. I'm importing wx.aui and trying to create dockable panels.
I have a panel that I've created a wx.aui.AuiManager on, and I have added two panes, one on top and one below. For both of them I have disabled the close button. Right now, the panes can be dragged into different dockable positions on the frame, or off of the frame to create a floating window. This window shows up as the Mac-native MiniFrame with a disabled close button. I do not want users to be able to separate the panes from the main frame.
I have passed .Floatable(False) to each pane's PaneInfo, but this won't allow the panels to be moved around at all, even if I also pass .Dockable(True).
Can I have panes in AUI that are dockable and movable, but not floatable?
I don't know if there's a way to do that or not; it may be a limitation of wx.aui. You should ask on the wxPython mailing list. Or you could try the mostly drop-in replacement, wx.lib.agw.aui (http://xoomer.virgilio.it/infinity77/AGW_Docs/aui_module.html#aui). It fixes a bunch of bugs in the default wx.aui and is written in pure Python.
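For reference, a rough sketch of the suggested setup with the pure-Python AUI (pane names and panel contents here are mine, not the asker's; whether the agw implementation honors Floatable(False) while still allowing drag-to-dock is exactly what you'd be testing):

    import wx
    import wx.lib.agw.aui as aui

    class MainFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title="AUI demo", size=(600, 400))
            self._mgr = aui.AuiManager(self)

            # Two plain panels standing in for the real content.
            top = wx.Panel(self)
            bottom = wx.Panel(self)

            # Desired behaviour: dockable and movable, but never floating.
            self._mgr.AddPane(top, aui.AuiPaneInfo().Name("top").Caption("Top")
                              .Top().CloseButton(False)
                              .Floatable(False).Dockable(True))
            self._mgr.AddPane(bottom, aui.AuiPaneInfo().Name("bottom").Caption("Bottom")
                              .Bottom().CloseButton(False)
                              .Floatable(False).Dockable(True))
            self._mgr.Update()

    app = wx.App(False)
    MainFrame().Show()
    app.MainLoop()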
Through experience I have found that the native Windows forms/components don't like to be changed. I know that in Delphi or Visual Studio you are given native Windows components to populate a form or window with, and then you attach code to the events those components fire (OnClick, for example).
However, how do programs like Word or Google's Chrome browser alter the standard Windows window? I thought it was somehow protected?
Chrome even seems to have tabs directly on the window's frame?
I know you can also get toolkits like Swing and Qt that have their own controls/components to populate a form. How do these work? (How does the operating system/computer know how a non-native button should act? For example, Chrome's back and forward buttons: they're not native components, are they?)
I can understand how an OpenGL/DirectX window would work, because you're telling the computer exactly what to draw with polygons/quads.
I hope this question is clear!
Windows does not protect GUI elements. Windows and controls can be subclassed to handle various drawing operations in a custom way. For example, a window may override the handling of the WM_NCPAINT message to draw a custom titlebar and frame:
http://msdn.microsoft.com/en-us/library/dd145212(VS.85).aspx
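A rough, hypothetical sketch of that mechanism (using pywin32 for brevity; the message flow is identical in C). The window procedure intercepts WM_NCPAINT and paints the frame itself instead of letting Windows draw the standard one:

    import win32api, win32con, win32gui

    def wnd_proc(hwnd, msg, wparam, lparam):
        if msg == win32con.WM_NCPAINT:
            # Paint the non-client area (titlebar + frame) ourselves.
            # The window DC covers the whole window; the client area is
            # repainted separately via WM_ERASEBKGND/WM_PAINT.
            hdc = win32gui.GetWindowDC(hwnd)
            left, top, right, bottom = win32gui.GetWindowRect(hwnd)
            brush = win32gui.CreateSolidBrush(win32api.RGB(40, 40, 40))
            win32gui.FillRect(hdc, (0, 0, right - left, bottom - top), brush)
            win32gui.DeleteObject(brush)
            win32gui.ReleaseDC(hwnd, hdc)
            return 0  # handled: suppress the default frame painting
        if msg == win32con.WM_DESTROY:
            win32gui.PostQuitMessage(0)
            return 0
        return win32gui.DefWindowProc(hwnd, msg, wparam, lparam)

    wc = win32gui.WNDCLASS()
    wc.lpfnWndProc = wnd_proc
    wc.lpszClassName = "CustomFrameDemo"
    wc.hInstance = win32api.GetModuleHandle(None)
    wc.hCursor = win32gui.LoadCursor(0, win32con.IDC_ARROW)
    wc.hbrBackground = win32con.COLOR_WINDOW + 1
    win32gui.RegisterClass(wc)

    hwnd = win32gui.CreateWindow(wc.lpszClassName, "Custom frame",
                                 win32con.WS_OVERLAPPEDWINDOW,
                                 100, 100, 400, 300, 0, 0, wc.hInstance, None)
    win32gui.ShowWindow(hwnd, win32con.SW_SHOW)
    win32gui.PumpMessages()

A real custom frame also handles WM_NCCALCSIZE and WM_NCHITTEST so resize borders and caption buttons keep working; this only shows the painting side.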
Some Windows controls have an "owner-draw" mode. If you use this, you get to draw the control (or at least vital parts of the control), while Windows takes care of responding to user input in the standard way.
Swing and Qt draw their own widgets at a low level using basic primitives, but they also have theme engines that can mimic the native controls.
Qt moved to native controls a while back. As for how Swing does it: it gets a basic window from the OS, then, much like OpenGL/DirectX, it does all of its drawing within that window. As for where to position things, that is what layout managers do. Each manager has a layout style (horizontal, vertical, grid), the components it has to draw, and a section of the window it is expected to fill. From there it does some pretty easy math to allocate its space to its controls.
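To make that "pretty easy math" concrete, here is an illustrative sketch (plain Python, not Swing) of the arithmetic a grid layout manager performs when allocating its section of the window to its components:

    def grid_layout(x, y, width, height, rows, cols):
        """Return one (x, y, w, h) rectangle per cell, in row-major order."""
        cell_w = width // cols
        cell_h = height // rows
        return [(x + c * cell_w, y + r * cell_h, cell_w, cell_h)
                for r in range(rows)
                for c in range(cols)]

    # A 400x300 window area split into a 2x3 grid of component slots:
    for rect in grid_layout(0, 0, 400, 300, rows=2, cols=3):
        print(rect)  # (0, 0, 133, 150), (133, 0, 133, 150), ...

A real manager also distributes the leftover pixels from the integer division and honors minimum/preferred sizes, but the core allocation is just this kind of division.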
There's no magic: non-native controls are simply drawn onto a blank window. Or, instead of being drawn with primitives, a control may be represented as one of several bitmaps chosen by state (e.g., a button may be shown as one .png for the normal state, another .png for the pressed state, etc.).
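As a sketch of that bitmap-per-state idea (the file names and method names are hypothetical, just to show the shape of it):

    # The "button" is nothing but whichever image matches its current state;
    # the toolkit tracks mouse input and blits the corresponding bitmap.
    STATE_BITMAPS = {
        "normal":  "button_normal.png",
        "hover":   "button_hover.png",
        "pressed": "button_pressed.png",
    }

    class BitmapButton(object):
        def __init__(self):
            self.state = "normal"

        def on_mouse_enter(self):
            self.state = "hover"

        def on_mouse_leave(self):
            self.state = "normal"

        def on_mouse_down(self):
            self.state = "pressed"

        def on_mouse_up(self):
            self.state = "hover"  # this is also where a click callback would fire

        def paint(self, draw_image):
            # draw_image is whatever blit routine the host toolkit provides
            draw_image(STATE_BITMAPS[self.state])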