Is wglMakeCurrent supposed to be called only once or does it need to be called before every buffer swap?
Can the current OpenGL context be reset by some external thing other than setting it via wglMakeCurrent?
I am just trying to narrow down the possible problem. I can't post relevant code here because I have no idea which part is relevant.
Currently I have a loop that does makeCurrent -> clear -> render. It renders correctly. I tried making the context current once on initialization, without making it current on every draw, but it rendered an empty screen. Only when I exited the window did the correct render flicker for one frame. I figured out that something is wrong by using NVIDIA's graphics debugger: the debugger's overlay flickered strangely. It doesn't do that with other applications.
Is wglMakeCurrent supposed to be called only once
Each GL context can be made current to at most one thread and drawable at any single point in time. If you use only one window and one GL context, it is enough to call wglMakeCurrent once, after you have created the context and the window.
If you use one context for multiple windows, you have to re-bind it at least once per frame and window. Note that switching the current context or window traditionally implied flushing the GL pipeline, but this can nowadays be prevented via the KHR_context_flush_control extension, making such a scheme much more efficient.
If you use multiple threads but a single GL context, you must push the context around from thread to thread by making it uncurrent in one thread, making it current again in the new thread, and so on. But that scheme should almost never be necessary. For multi-threaded GL, you should create multiple shared contexts, and then you usually need only one wglMakeCurrent per thread.
or does it need to be called before every buffer swap?
Note that the SwapBuffers function is not a GL function (hence no gl prefix in the name) and therefore works independently of the currently active GL context - the function takes the HDC of the window in which you want the buffer swap to occur.
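As an illustration of the single-window, single-context case, here is a minimal C++ sketch (hdc and hglrc are assumed to come from GetDC and wglCreateContext during initialization; error handling omitted):

#include <windows.h>
#include <GL/gl.h>

// Minimal sketch: one window, one GL context, made current exactly once.
void renderLoop(HDC hdc, HGLRC hglrc, bool& running) {
    wglMakeCurrent(hdc, hglrc);            // once, after window and context creation

    while (running) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene ...
        SwapBuffers(hdc);                  // plain Win32 call, needs only the HDC
    }

    wglMakeCurrent(NULL, NULL);            // unbind before destroying the context
    wglDeleteContext(hglrc);
}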
Can the current OpenGL context be reset by some external thing other than setting it via wglMakeCurrent?
No, not really. There is the graphics reset situation which can be handled via ARB_robustness:
* Provide a mechanism for an OpenGL application to learn about graphics resets that affect the context. When a graphics reset occurs, the OpenGL context becomes unusable and the application must create a new context to continue operation. Detecting a graphics reset happens through an inexpensive query.
But such a graphics reset does not unbind the current GL context - the affected context is just not usable any more.
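For completeness, a hedged sketch of how an application might poll for such a reset via ARB_robustness (it assumes the context was created with reset notification enabled, e.g. through WGL_ARB_create_context_robustness; the typedef mirrors the one in glext.h):

#include <windows.h>
#include <GL/gl.h>

typedef GLenum (APIENTRY *PFNGLGETGRAPHICSRESETSTATUSARBPROC)(void);

bool contextWasReset() {
    PFNGLGETGRAPHICSRESETSTATUSARBPROC glGetGraphicsResetStatusARB =
        (PFNGLGETGRAPHICSRESETSTATUSARBPROC)wglGetProcAddress("glGetGraphicsResetStatusARB");
    // GL_NO_ERROR means the context is fine; any other status means the context
    // is lost and must be torn down and recreated.
    return glGetGraphicsResetStatusARB && glGetGraphicsResetStatusARB() != GL_NO_ERROR;
}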
Related
So I'm very new to win32ui, basically just starting. I was once using BitBlt with the python win32api module, and as far as I remember, to draw on top of the display (over any applications, if they are open) I had to get a specific context handle. But my memory is hazy on whether it was simply NULL or some specific context. NULL doesn't seem to work, so I wonder how to obtain that general context? I really want to avoid creating a fully transparent, non-blocking window.
The GetDC API allows you to get a device context for any given window. Alternatively,
If [hWnd] is NULL, GetDC retrieves the DC for the entire screen.
You can use the device context for the entire screen to read from, reliably (with restrictions). Rendering into a device context for a window you do not own won't be reliable, though. While it won't fail straight away, the window owner can overwrite your rendering at any point. There's no way for you to even be notified about this.
If you need to render on top of the screen you will have to create a top-most (transparent) window yourself, and use its device context. Make sure you ask the question: What if two programs did this?
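As a rough sketch of the read-only case mentioned above (the 100x100 region and its coordinates are arbitrary examples):

#include <windows.h>

// Copy a region of the screen into a memory bitmap for reading.
void grabScreenRegion() {
    HDC screen = GetDC(NULL);                        // DC for the entire screen
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, 100, 100);
    HGDIOBJ old = SelectObject(mem, bmp);

    BitBlt(mem, 0, 0, 100, 100, screen, 0, 0, SRCCOPY);
    // ... read the pixels (e.g. via GetDIBits) ...

    SelectObject(mem, old);
    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
}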
Is there any general-purpose programming equivalent of low-level interrupts in microcontrollers/embedded systems?
I am vaguely familiar with the concept of events (mouse events and the like) which seems similar but not general enough.
Is there a mechanism (native or otherwise), specifically in C/C++, to handle custom events, that is, events whose triggering is decided by a user-defined condition - like, say, when the mouse pointer moves into a particular region or when a particular user action occurs?
To provide some context, I am working on an OpenCV based interactive project where I would like to trigger specific actions when the user points to a particular place on the screen.
It seems like a waste of computation to check whether the pointer is currently at such-and-such a location on the screen in each iteration of the video stream, and I would like to automate function calls based on a pre-defined condition.
Or is there any other (more efficient) mechanism by which I could improve this procedure?
Thanks.
There is no interrupt programming in C/C++ as there is on a microprocessor or microcontroller.
If your screen is a touch screen, then try to get hold of the SDK or API of your operating system to get notified when a touch happens. (The OS internally maintains an interrupt table for touch, keypad presses, or mouse movement. We can only program the logic we want to execute on such an event, nothing more than that.)
If it's not a touch screen, then you have to monitor the position of the user with a sensor, usually a camera (a webcam). For that you have to check each frame of the camera to determine the position of the user.
EDIT:
What you mentioned is the correct way. It's better to check each frame, or else the response of your system will be sluggish. You can set a counter to 1, increase it with each frame, and reset it on reaching any desired value. This is almost equivalent to an infinite loop.
Or you can accept some key from the keyboard to break out of the loop (OpenCV has such functions).
A slightly more advanced approach is to grab frames from the camera in a different thread than the main thread of your program. Then all you need to do is start and stop that thread.
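A rough OpenCV sketch of the per-frame check discussed above; detectUserPosition and the trigger region are made-up placeholders for your own detection logic:

#include <opencv2/opencv.hpp>

static cv::Point detectUserPosition(const cv::Mat&) {
    return cv::Point(0, 0);                  // placeholder: plug in your own detection here
}

int main() {
    cv::VideoCapture cap(0);
    cv::Rect region(100, 100, 200, 200);     // hypothetical trigger region

    cv::Mat frame;
    while (cap.read(frame)) {
        if (region.contains(detectUserPosition(frame))) {
            // trigger the action associated with this region
        }
        cv::imshow("view", frame);
        if (cv::waitKey(30) == 27) break;    // ESC breaks out of the loop
    }
}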
I'm fairly new to GUI programming and I'm trying to write a plotting lib in D to use with some otherwise console-based scientific apps. I'm using DFL as my GUI library.
Assume my plot form has a method called showPlot() that's supposed to display the plot on the screen. I would like to be able to have any thread in my app throw up a plot window and either block until the plot is closed or continue working, without the caller of showPlot() having to know what any other thread is doing with regard to plotting, or what plots were created in the past and may still be on screen. (The internals of showPlot() may, of course, have this knowledge.)
I'm still trying to wrap my head around how GUI libs typically work under the hood. It seems like you're supposed to only have one GUI thread, and one main form. I'd appreciate answers at the language/library-agnostic design pattern level in addition to language/library-specific ones.
Edit: To emphasize, this app has no GUI besides the plots that it throws up at interesting points in its execution. It's basically a console app plus a few plots. Therefore, there's no well-defined "main" form.
What you're going to be able to do will likely rely on how DFL works. Typically, in a GUI app, there's an event thread which handles all the events in your app - be it repaint events, button click events, mouse click events or whatever. That thread calls the event handler which is registered to handle the event (that frequently being the widget which was clicked on or whatnot). The problem that you run into is that if you try and do too much in those event handlers, you lock up the event thread, so the other events (including repaint events) don't get processed in a timely manner. Some GUI toolkits even specifically limit what you're allowed to do in an event handler. Some also limit certain types of operations to that specific thread (like doing anything - especially object creation - with actual GUI-code like the various widget or window classes that the toolkit is bound to have).
Typically, the way to handle this is to have the event thread either fire off separate threads in the event handlers and let those other threads actually handle the events, or you set some amount of state in the event handler, a separate, already-running thread is alerted to this change in state (possibly using the observer pattern), and it handles things appropriately based on that state. In either case, what the event handlers themselves do is generally fairly limited.
How GUIs work is generally very event-based. The program handles events from the user and the system and doesn't tend to do a lot without being signaled to do it. Frequently, GUI apps don't do anything until they're told to by an event (though there are plenty of cases where a background thread is doing some sort of work separate from the events). What you're trying to do doesn't sound particularly event based, so that complicates things a bit.
My guess would be that you would need to have your app create a new window every time that you want to throw a new plot up. That window would likely be a child window of the main window which would be hidden since you obviously don't need it, and presumably DFL requires you to have a main window of some kind. There's a decent chance that each thread which wanted to create a window would have to tell the main GUI window to do that, but it depends on DFL really. It's also possible that DFL allows any thread to create new GUI elements (such as a new window).
Regardless, it would almost certainly be the main event thread which actually handles the window. The thread which wants to create a window and populate it would likely have to create the window (either directly or indirectly), and then update the state of the window by updating a set of shared variables so that the event thread could appropriately repaint the window. The most that the original thread would likely do to actually repaint the window is send a repaint event to the window after it has updated the state of the shared variables that hold the data necessary to paint the window. It wouldn't handle the painting itself.
As for the thread blocking, it would likely have to do busy-waiting until the window closed event was received by the event thread, and some shared variable was updated, or you could make it go to sleep until the event thread woke it up after receiving the window closed event. In the case where you didn't want it to block, it just wouldn't care about waiting to be told about the window-closed event and would keep chugging along.
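The idea is language-agnostic, but just as an illustration of the "go to sleep until the event thread wakes it up" option, here is a minimal C++ sketch (not D/DFL; the names are invented for the example):

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable closedCond;
bool plotClosed = false;

// Called by the thread that invoked showPlot() in blocking mode.
void waitForPlotClose() {
    std::unique_lock<std::mutex> lock(m);
    closedCond.wait(lock, [] { return plotClosed; });
}

// Called by the GUI/event thread when it receives the window-closed event.
void notifyPlotClosed() {
    { std::lock_guard<std::mutex> lock(m); plotClosed = true; }
    closedCond.notify_all();
}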
In any case, hopefully that's enough information to put you in the right direction. Usually events drive everything, with everything hanging off the GUI, so to speak. But in your case, your app is using the GUI more like stdout. So, you might do better to first work on some small, fairly stupid GUI apps that are actual, normal, event-based GUI apps (like a Tic-Tac-Toe game or something) in order to get a better handle on how the GUI stuff works before trying to get it to work in a somewhat less standard manner like you are.
GUI frameworks typically have their own event loop and call the application code as callbacks in reaction to external events (button clicks, timers, redraws, ...). The main difference between a "typical" console application and a GUI application is that you give up control over when a function is called to the GUI framework.
Threads are typically used when the application code contains some long-running task that cannot be broken into smaller chunks (large computations, copying files, taking over the world, ...). Then one thread is used to keep the GUI responsive, while the work is done in a separate thread. The main problem is keeping both threads correctly in sync, since most GUI toolkits are not capable of handling calls from more than one thread. When you only have small pieces of work that don't block the GUI for a long time (<0.1 s), it is better to do without worker threads.
I also strongly suggest separating the GUI code and the application logic from each other. Having GUI code and application code mixed together is a maintenance nightmare.
I'm new to UI programming, and I'm trying to get started with OpenGL. When I run an example program which creates a new OpenGL window with GLUT, it works fine. Good. However, in the context of another program, where I have to respond to Draw events (on Windows) with a device context passed to me - and where I might not have GLUT available - my confusion is this:
When is a device context created and destroyed? Can I draw to any device context given to me, or only some of them (and how do I know)?
Do I have to create my own OpenGL context and use that to draw to, or can I use a "current" OpenGL context? Do I have to re-create the context every time a draw event is sent?
Basically my question is, given a situation where I am sent "Draw" events, how often do I attempt to create OpenGL contexts and how does this relate to the creation/destruction cycle of device contexts?
In general, it's usually safe to think of a single OpenGL context as a window, especially on Windows.
A device context will (typically) map to a window handle (HWND). It's actually a DC (HDC is the handle), but normally you associate one HDC with a single HWND. In Windows, you'll create a device context to use based off of the window on screen where you want to render.
Typically, you'll reuse this device context for the entire runtime of your application. If you want to render into a different window, you'll need to generate a device context (HDC) for the new window handle. Also, offscreen rendering is a bit different, since you'd create a compatible device context for that, as well.
As for your questions:
1) When you create the window where you want to do the rendering, you'll grab a device context, and use it for the lifetime of that window.
2) You'll want to always use the device context created for the window where you are rendering.
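For reference, here is a hedged sketch of that one-time setup on Windows, combining the device context from point 1) with a GL rendering context (hwnd is assumed to be the target window; error handling omitted):

#include <windows.h>
#include <GL/gl.h>

// Grab the window's DC once, pick a pixel format, and create one GL context.
HGLRC setupGL(HWND hwnd, HDC& hdc) {
    hdc = GetDC(hwnd);                       // reuse this HDC for the window's lifetime

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

    HGLRC hglrc = wglCreateContext(hdc);     // one GL context for this window
    wglMakeCurrent(hdc, hglrc);
    return hglrc;
}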
I've been developing Web applications for a while now and have dipped my toe into GUI and Game application development.
In a web application (PHP for me), a request is made to a file, that file includes all the necessary files to process the info into memory, and then the flow is (mainly) from top to bottom for each request.
I know that for games the action happens within the game loop, but how are all the different elements of a game layered into that single loop (menu system, GUI, loading of assets, and the 3D world), with the constant loading and unloading of certain things?
Same for GUI programs: I believe there's an "application loop" of sorts.
Are most of the items loaded into memory up front and then accessed, or are the items linked in and loaded into memory when needed?
What helped me develop web applications faster was understanding the flow of the program. It doesn't have to be detailed, just the general idea or pseudocode.
There's almost always a loop in all of these - but it's not something you would tend to think about during most of your development.
If you take a step back, your web applications are based around a loop - the Web Server's accept() loop:
while (listening) {
    get a socket connection;
    handle it;
}
.. but as a Web developer, you're shielded from that, and write 'event driven' code -- 'when someone requests this URL, do this'.
GUIs are also event driven, and the events are also detected by a loop somewhere:
while (running) {
    get mouse/keyboard/whatever event;
    handle it;
}
But a GUI developer doesn't need to think about the loop much. They write 'when a mouse click occurs here, do this'.
Games, again the same. Someone has to write a loop:
while (game is in progress) {
    invoke every game object's 'move one frame' method;
    poll for an input event;
}
... while other code is written in a more event-driven style: 'when a bullet object coincides with this object, trigger an explosion event'.
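To make that concrete, here is a bare-bones C++ sketch of such a loop; Event, pollEvent, render, and QUIT are stand-ins for whatever your platform or engine actually provides:

#include <vector>

struct GameObject { virtual void update() = 0; virtual ~GameObject() {} };

struct Event { int type; };
const int QUIT = 0;                                       // hypothetical event type
bool pollEvent(Event&) { return false; }                  // stub: your platform's input queue
void render(const std::vector<GameObject*>&) {}           // stub: draw everything

void runGameLoop(std::vector<GameObject*>& objects) {
    bool running = true;
    while (running) {
        Event e;
        while (pollEvent(e)) {                            // handle all pending input events
            if (e.type == QUIT) running = false;
        }
        for (GameObject* o : objects) o->update();        // "move one frame"
        render(objects);                                  // repaint the world
    }
}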
For applications, and to a lesser extent games, the software is event driven. The user does "something" with the keyboard or mouse, and that event is sent to the rest of the software.
In games the game loop is important because it is focused on processing the screen and the game state, with many games needing real-time performance. With modern 3D graphics APIs, much of the screen processing can be offloaded onto the GPU. However, the game state is tracked by the main loop. Much of a game team's effort is focused on making the processing of the loop very smooth.
For applications, heavy processing is typically spawned off onto a thread. It is a complex subject because of the issues surrounding two things trying to access the same data. There are whole books on the subject.
For applications the sequence is
The user does X; X and associated information (like X,Y coordinates) are sent to the UI_Controller.
The UI decides which command to execute.
The Command is Executed.
The model/data is modified.
The Command tells the UI_Controller to update various areas of the UI.
The UI_Controller redraws the UI.
The Command returns.
The Application waits for the next event.
There are several variants of this. The model can allow listeners to wait for changes in the data. When the data changes, the listeners execute and redraw the UI.
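As an illustration of that listener variant, a tiny C++ sketch (the class and method names are invented for the example):

#include <functional>
#include <vector>

// Model that lets listeners (e.g. the UI controller) subscribe to data changes.
class Model {
public:
    void addListener(std::function<void()> onChange) { listeners.push_back(onChange); }
    void setValue(int v) {
        value = v;
        for (auto& listener : listeners) listener();   // notify: listeners redraw the UI
    }
private:
    int value = 0;
    std::vector<std::function<void()>> listeners;
};

A Command would call setValue(), and the UI_Controller registered via addListener() would then redraw the affected areas.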
As far as game programming goes, I was merely a hobbyist; however, this is what I usually did:
I had an object that represented a very generic concept of a "Scene" in the game. All the different major sections of the game derived from this Scene object. A scene could really be anything, depending on what type of game it is. Anyway, each more specific scene that derived from scene had a procedure to Load all of the necessary elements for that scene.
When the game was to change scenes, the pointer to the active scene was set to a new scene, which would then load all of its needed objects.
The generic Scene object had virtual functions such as Load, Draw, and Logic that were called at particular times in the game loop from the active scene pointer. Every specific scene had its own ways of implementing these methods.
I don't know if that's how it's supposed to be done or not, but it was a very easy way for me to control the flow of things. The scene concept also made it easy to store multiple scenes as collections. With multiple scene pointers stored in a stack of sorts at one time, scenes could be kept in reserve and maintain their full state when returned to, or even do things like dim but continue to draw while the active scene drew over them as an overlay of sorts.
So anyway, if you do it like that, it's not precisely like a web page, but I guess if you think about it the right way it's similar enough.
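To sketch that Scene idea in C++ (the original was hobbyist code in no particular language, so treat this purely as an illustration):

#include <memory>
#include <vector>

// Generic scene: each concrete scene (menu, level, ...) overrides these.
class Scene {
public:
    virtual ~Scene() {}
    virtual void load()  = 0;   // load the assets this scene needs
    virtual void logic() = 0;   // per-frame game logic
    virtual void draw()  = 0;   // per-frame rendering
};

class SceneStack {
public:
    void push(std::unique_ptr<Scene> s) { s->load(); scenes.push_back(std::move(s)); }
    void pop() { scenes.pop_back(); }            // the scene below keeps its full state
    void frame() {
        if (scenes.empty()) return;
        scenes.back()->logic();                  // only the active scene runs logic
        for (auto& s : scenes) s->draw();        // scenes in reserve still draw underneath
    }
private:
    std::vector<std::unique_ptr<Scene>> scenes;
};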