I'm using NSOpenGLContext to optimize drawing in AU plugins. There are multiple plugins, and each can have multiple instances, so each plugin creates a global NSOpenGLContext and attaches the individual NSView contexts to it, so that the textures do not need to be duplicated.
Problem: when I open one plugin, it's OK. I open a different one, it's OK. Now I release the first one, and it destroys all its resources, so the second one loses its textures!
I checked that both contexts are different and their sharing setups are different, and they both call makeCurrentContext in both lockFocus and drawRect. Any ideas what is wrong here?
Btw. I got the same thing working using AGL and using WGL (on Windows), both without problems, so it seems to be just Cocoa, as usual.
OK, I think I found a solution: one needs to call [NSOpenGLContext clearCurrentContext]; after any painting or other processing. Why? No idea... I'm considering it yet another bug in Mac OS X... impossible pseudosystem...
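For what it's worth, a minimal sketch of the pattern described above, assuming a custom NSView subclass whose glContext property holds the per-view NSOpenGLContext (the class and property names are illustrative, not from the original project):

    #import <Cocoa/Cocoa.h>
    #import <OpenGL/gl.h>

    // Illustrative view subclass; "glContext" stands in for the per-view
    // NSOpenGLContext that shares resources with the plugin-global context.
    @interface PluginGLView : NSView
    @property (strong) NSOpenGLContext *glContext;
    @end

    @implementation PluginGLView

    - (void)drawRect:(NSRect)dirtyRect
    {
        [self.glContext makeCurrentContext];

        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw using the shared textures ...

        [self.glContext flushBuffer];

        // The workaround described above: drop the thread's current
        // context once drawing is finished.
        [NSOpenGLContext clearCurrentContext];
    }

    @end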
I have a situation where I need to embed some 3rd-party, closed-source Unity applications into our own application. I'm injecting a DLL which creates a DX11 shared texture from their swap chain. This part works and it's done.
Additionally, I want to hide the form wrapping the Unity app (luckily, you can set its parent handle with a command-line argument) so I have 100% control over what happens to its texture in our own app (and so it won't interfere with the overall look of our app). This also works fine: I get the texture without a problem even when the Unity form is completely off-screen.
Now, my problem is that this Unity application needs to be driven with multitouch, and after a fair amount of googling/Stack Overflow reading I have more or less concluded that there is no way (or at least I haven't found one) to compose valid WM_POINTER* messages for just one window in Windows. (This is supported by the fact that you have to call a separate WinAPI function to get the full data of a pointer/touch from the ID received in the lParam of a WM_POINTER* message.)
So I'm using the touch injection Windows API (InitializeTouchInjection and InjectTouchInput); the information about these APIs on the internet is misleading at best, but I have worked out all their quirks, and it works fine as long as the Unity form is visible on the screen, or in other words, as long as the touch position is inside the screen boundaries.
And now, finally, the problem: when I specify an off-screen coordinate for the injected touches, I get an ERROR_INVALID_PARAMETER (87 / 0x57) system error. Otherwise it works. Is there a way to turn off this check in Windows? Or has anybody solved this problem some other way?
(Our app is not an end-user one, we have full control over the environment it runs inside, system-wide modifications are also OK.)
Thanks in advance!
You can't turn that check off: the error code is the function's return status, and it indicates that the call failed, in which case the function changes nothing and only reports the error. If the check could be disabled, what would the status of the call be, success or failure?
You need to check the coordinates manually and decide what to do.
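As an illustration of that manual check, here is a minimal plain-C sketch that clamps the injected touch to the virtual-screen bounds before calling InjectTouchInput; the helper and its clamping policy are assumptions, not part of the original code:

    #define _WIN32_WINNT 0x0602   // touch injection requires Windows 8+
    #include <windows.h>

    // Clamp the touch position to the virtual screen so InjectTouchInput
    // does not fail with ERROR_INVALID_PARAMETER for off-screen points.
    static BOOL InjectClampedTouch(UINT32 pointerId, LONG x, LONG y,
                                   POINTER_FLAGS flags)
    {
        LONG left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
        LONG top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
        LONG right  = left + GetSystemMetrics(SM_CXVIRTUALSCREEN) - 1;
        LONG bottom = top  + GetSystemMetrics(SM_CYVIRTUALSCREEN) - 1;

        if (x < left)   x = left;
        if (x > right)  x = right;
        if (y < top)    y = top;
        if (y > bottom) y = bottom;

        POINTER_TOUCH_INFO contact;
        ZeroMemory(&contact, sizeof(contact));
        contact.pointerInfo.pointerType       = PT_TOUCH;
        contact.pointerInfo.pointerId         = pointerId;
        // e.g. POINTER_FLAG_DOWN | POINTER_FLAG_INRANGE | POINTER_FLAG_INCONTACT
        contact.pointerInfo.pointerFlags      = flags;
        contact.pointerInfo.ptPixelLocation.x = x;
        contact.pointerInfo.ptPixelLocation.y = y;
        contact.touchFlags = TOUCH_FLAG_NONE;
        contact.touchMask  = TOUCH_MASK_NONE;

        return InjectTouchInput(1, &contact);
    }

    // Once at startup:
    //     InitializeTouchInjection(10, TOUCH_FEEDBACK_NONE);

Whether clamping (as opposed to dropping the touch, or remapping it into the hidden form's client area) is the right policy depends on how the Unity app interprets the coordinates.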
I'm looking for a way to embed another application into my own view.
The business reason is that the company has many small Electron apps (basically small, portable web programs with a self-contained browser) that it wants to embed inside an OS X program. These Electron apps would ideally integrate and display inside a subview seamlessly, so they look like little web frames inside our larger program.
I think programmatically it would be easiest to open another program as a subview, but I'll take whatever I can get. Maybe even capturing its NSWindow somehow. (The Electron source is available, so it is easily discoverable.) Maybe a way to dock the other program inside mine, or (getting more desperate) finding its view and sending commands to constrain its size and location on top of mine.
So far, everything I've found says it is not really possible. I've found I can take the more desperate course: launch a process, find its view, and position it over a spot on my display, and when my window is moved or the content is scrolled, send messages to move the other window. But that isn't really integrated: the menu stays separate, and so on; I cannot truly incorporate it.
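That desperate course would look roughly like the following Accessibility API sketch (purely illustrative: it assumes our app has been granted accessibility access and that the target process exposes at least one window; error handling is omitted):

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Sketch: move the first window of another process (identified by its
    // pid) to a given top-left screen position, so it overlays a region of
    // our own window.
    static void MoveForeignWindow(pid_t pid, CGPoint screenOrigin)
    {
        AXUIElementRef app = AXUIElementCreateApplication(pid);

        CFArrayRef windows = NULL;
        AXUIElementCopyAttributeValue(app, kAXWindowsAttribute,
                                      (CFTypeRef *)&windows);
        if (windows != NULL && CFArrayGetCount(windows) > 0) {
            AXUIElementRef window =
                (AXUIElementRef)CFArrayGetValueAtIndex(windows, 0);

            AXValueRef position = AXValueCreate(kAXValueCGPointType,
                                                &screenOrigin);
            AXUIElementSetAttributeValue(window, kAXPositionAttribute,
                                         position);
            CFRelease(position);
        }
        if (windows != NULL) CFRelease(windows);
        CFRelease(app);
    }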
Any ideas or helpful implementation details?
EDIT 1: Thanks for those responses. How about if we could have the Electron apps expose their NSWindow somehow? Could that be leveraged? I'm thinking the application could send messages to (somehow, I'm not sure exactly how) set its parent window to one inside this app. In the Windows API this is much easier, since you can call SetParent on anything, even windows belonging to different processes. But Cocoa seems more difficult.
This isn't really a thing you can do in Mac OS X. Applications are not "composable" in the way you're hoping for - while it is possible to share a view with a subprocess under certain very specific circumstances (e.g., Safari or Chrome tab renderers), this requires the subapplication to be written in a very specific way to permit that. It's not something that would be feasible in the situation you're describing.
If you have access to the source of these Electron apps, consider combining them into a single overarching Electron application. Alternatively, if it's not possible for these applications to coexist within a single Electron app, you may want to consider using something like Chromium Embedded Framework to build your wrapper application; note, however, that this may require you to implement parts of the Electron framework yourself.
You cannot do that. Cocoa requires you to have only one NSApplication instance per UI app, so you will have to fork/exec a new process and launch your applications from there.
If you can recompile the source code, you can create a custom subclass of NSApplication and use that custom class in all the applications, or you can run the other applications on an NSThread without an NSApplication instance and go from there.
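A trivial sketch of the launch-as-a-separate-process part (the bundle path is just a placeholder):

    #import <Cocoa/Cocoa.h>

    // Sketch: launch one of the Electron apps as its own process from the
    // wrapper application. The path below is illustrative.
    static void LaunchElectronApp(void)
    {
        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/Applications/SomeElectronApp.app/Contents/MacOS/SomeElectronApp"];
        [task setArguments:@[]];
        [task launch];
    }

Combining this with window tracking (as in the Accessibility sketch earlier in the question) is about as close to embedding as an out-of-process app gets here.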
I am developing the Mac OS port of an iOS app and am facing a problem with NSManagedObjectContexts when using NSArrayControllers in a storyboard-based Cocoa app.
It's kind of a follow-up question to:
Storyboard with TabViewController in OS X Application - Core Data Array Controllers in each scene?
I have some ViewControllers presented as in a TabBarController, showing the same Core Data entities. They are loaded through NSArrayControllers that are hooked up in Interface Builder.
With my existing knowledge it was no problem to get the data onto the screens; even editing and saving to Core Data works.
But I realized that every storyboard scene gets its own instance of the NSArrayControllers, and each has its own NSManagedObjectContext.
When I change and save the data on one screen, it is NOT updated on the other screens, which are all bound through the IB bindings and work in all other cases. They just show the data they loaded initially and do not update automatically.
I think the problem is that the changed data from context A is not merged (or synced) into the other screens' contexts.
What is the best way of doing that? Should I use the NSManagedObjectContextDidSaveNotification for this?
That would mean writing a lot of code to manually merge the changes from one context into all the other NSManagedObjectContexts. That smells really bad to me; I think there must be a much easier way that I am not aware of and have been unable to find.
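For reference, the notification-based merge mentioned above would look roughly like this ("receivingContext" is an illustrative name for the context each scene should keep up to date):

    #import <CoreData/CoreData.h>

    // Sketch: merge saves from any other context into this scene's context.
    // Assumes both contexts live on the main thread.
    - (void)observeSavesIntoContext:(NSManagedObjectContext *)receivingContext
    {
        [[NSNotificationCenter defaultCenter]
            addObserverForName:NSManagedObjectContextDidSaveNotification
                        object:nil
                         queue:[NSOperationQueue mainQueue]
                    usingBlock:^(NSNotification *note) {
                        if (note.object != receivingContext) {
                            [receivingContext mergeChangesFromContextDidSaveNotification:note];
                        }
                    }];
    }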
If you do have a hint for me, please point me in the right direction.
Thanks for that already.
Problem solved. I made a dumb mistake with Cocoa Bindings: I had dragged an object into every storyboard scene and set its class to the AppDelegate, which meant I was instantiating several AppDelegates. Very bad idea! I corrected this by referencing the AppDelegate through properties on my ViewControllers, and now it works as it should. IB just has its little edges where one has to be totally aware of what is happening.
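A minimal sketch of the corrected setup, assuming the AppDelegate exposes a managedObjectContext property (class and property names are placeholders):

    #import <Cocoa/Cocoa.h>
    #import "AppDelegate.h"

    // In each view controller: reach the one existing AppDelegate instead
    // of instantiating new ones in the storyboard, so every scene shares
    // the same NSManagedObjectContext.
    - (NSManagedObjectContext *)managedObjectContext
    {
        AppDelegate *delegate = (AppDelegate *)[NSApp delegate];
        return delegate.managedObjectContext;
    }

Each scene's NSArrayController can then bind its Managed Object Context to this property rather than to a per-scene AppDelegate object.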
Well, Chrome currently runs plugins out of process, and Firefox 4 will use the same model.
That means the plugin process is now separated from the browser process.
The plugin process might NOT have a window at all.
My plugin is based on NSView.
Before the Cocoa event model, when I could access the NSWindow in the browser process, all I had to do was add my_view as a subview of the window's contentView:
[[the_window contentView] addSubview:my_view]
I did NOT need to process events myself; it just worked.
But now I convert NPCocoaEvents into NSEvents in my event-processing code.
Do I have to convert them myself?
Also, there are some NSEvents I cannot construct at all, for example mouse wheel events.
What should I do?
Did I take the wrong approach?
Please enlighten me.
Do I have to convert them myself?
If you plan to use the approach of forwarding NSEvents to your existing NSView then yes; there's no way to get access to the original NSEvents. They don't exist in the plugin process.
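To illustrate that conversion (a sketch only: the NPCocoaEvent field names follow the NPAPI Cocoa event model headers, and converting the plugin-local pluginX/pluginY coordinates into the window coordinates NSEvent expects is glossed over), a mouse event can be rebuilt with NSEvent's factory methods, and a wheel event can be synthesized by going through a CGEvent:

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>
    #include "npapi.h"   // for NPCocoaEvent

    // Rebuild an NSEvent from an NPCocoaEvent mouse-down.
    static NSEvent *MouseDownFromNPCocoaEvent(const NPCocoaEvent *event,
                                              NSPoint locationInWindow,
                                              NSInteger windowNumber)
    {
        return [NSEvent mouseEventWithType:NSLeftMouseDown
                                  location:locationInWindow
                             modifierFlags:(NSUInteger)event->data.mouse.modifierFlags
                                 timestamp:[[NSProcessInfo processInfo] systemUptime]
                              windowNumber:windowNumber
                                   context:nil
                               eventNumber:0
                                clickCount:event->data.mouse.clickCount
                                  pressure:1.0f];
    }

    // There is no public NSEvent constructor for scroll wheel events, but
    // one can be created by wrapping a CGEvent.
    static NSEvent *WheelEventFromDeltas(int32_t deltaY, int32_t deltaX)
    {
        CGEventRef cgEvent = CGEventCreateScrollWheelEvent(NULL,
                                                           kCGScrollEventUnitLine,
                                                           2, deltaY, deltaX);
        NSEvent *nsEvent = [NSEvent eventWithCGEvent:cgEvent];
        CFRelease(cgEvent);
        return nsEvent;
    }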
Another option would be to move away from trying to use native controls, and do your own drawing and event handling. This is the way most NPAPI plugins work.
A third possibility would be to open a separate window for your plugin content, and put your view in that window. This isn't technically supported by NPAPI, and it won't be perfect, but it might be a short-term way to get your plugin working while you explore long-term options.
Did I take the wrong approach?
Yes, what you were doing before was an unsupported hack, and not how NPAPI was intended to be used. Adding a view to a browser's window assumes things about the browser's view hierarchy that are implementation details and subject to change at any time.
One option would be to use the FireBreath framework to create your plugin, as it already has a lot of the abstraction for negotiating the event and drawing models as well as an event abstraction. It's pretty straightforward to get up and going.
I am currently porting some OpenGL tutorials from Win32/GLUT to Cocoa/Mac OS X. In windowed mode everything works, but when the context switches to fullscreen, the screen may be empty (only the clear colour)! For example, the first, second, and third times the cube is there, but the fourth time it is not. This happens even if the app launches in fullscreen without a shared context. I don't understand it.
Xcode 3.2.1, Mac OS X 10.6.2
source link
It looks like AFController's enterFullScreen method probably needs to set up the OpenGL context ([scene initGL]).
Also, awakeFromNib may be called before the application is ready to draw, so perhaps it's not the best place for [scene initGL]. I suggest implementing NSApplication's delegate method, applicationDidFinishLaunching:, and moving [scene initGL] there. Just to be safe, you might also try calling NSOpenGLContext's makeCurrentContext from there as well.
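A rough sketch of that suggestion, in AFController (assuming it is set as the application's delegate; scene and initGL come from the answer above, while glView is an assumed outlet for the NSOpenGLView):

    // Initialize GL once the application has finished launching rather
    // than in awakeFromNib, and make the view's context current first.
    - (void)applicationDidFinishLaunching:(NSNotification *)notification
    {
        [[glView openGLContext] makeCurrentContext];
        [scene initGL];
    }

The enterFullScreen path would need the same [scene initGL] (or at least a makeCurrentContext call) after creating the fullscreen context, since GL objects created in the windowed context are not automatically available in a new, unshared one.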