I am looking for a way to easily load images from the web in Cocoa (for Mac, not iOS).
Any pointers?
Thanks
I know this is late to answer, but I made an NSImageView subclass that downloads images asynchronously. It's called PVAsyncImageView. Check it out here: PVAsyncImageView
NSURLConnection is the Cocoa API for the job.
However, some people prefer ASIHTTPRequest (I'm one of them), so you should check that out too. The developers of that library have really good docs as well!
CGDataProviderCreateWithURL can be used to create a CGImageRef. Using this, you shouldn't have to set up your own URL connections (but you should not call it in real time or from the drawing thread, because it will block).
Once you have a CGImageRef, you can either use it directly or use it to create an NSImage.
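For illustration, here is a minimal sketch of that approach using the plain C Core Graphics API (the URL is a placeholder and the code assumes the resource is a JPEG; the final NSImage step is one Objective-C call shown in a comment):

    // Minimal sketch: build a CGImageRef from a remote URL with Core Graphics.
    // Blocks on network I/O, so call it from a background thread, not while drawing.
    #include <ApplicationServices/ApplicationServices.h>

    CGImageRef CreateImageFromURL(void) {
        // Placeholder URL; assumes the resource is a JPEG.
        CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault,
                                             CFSTR("https://example.com/picture.jpg"),
                                             NULL);
        CGDataProviderRef provider = CGDataProviderCreateWithURL(url);
        CFRelease(url);
        if (provider == NULL) return NULL;

        CGImageRef image = CGImageCreateWithJPEGDataProvider(provider, NULL, true,
                                                             kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);
        // In Objective-C you could now wrap it:
        //   NSImage *nsImage = [[NSImage alloc] initWithCGImage:image size:NSZeroSize];
        return image;  // caller releases with CGImageRelease()
    }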
I'm trying to get a list of the fonts installed on a Surface.
However, even after including the relevant header, I can't call EnumFontFamilies.
Why is that? How could I call it, and if I can't, what can I do to achieve similar functionality?
Thanks.
From a thread on the Dev Center forums, it looks like you could do this via C++/DirectX as a WinRT component (the code here would be a start).
Or take a look at Christophe Wille's WinRT snippets project on GitHub.
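For what it's worth, here is a minimal sketch of the kind of thing such a C++ component could do using DirectWrite (the names and structure here are illustrative, not taken from the linked code):

    // Sketch: enumerate installed font family names with DirectWrite.
    // Link against dwrite.lib.
    #include <dwrite.h>
    #include <string>
    #include <vector>

    std::vector<std::wstring> GetFontFamilyNames() {
        std::vector<std::wstring> names;

        IDWriteFactory* factory = nullptr;
        if (FAILED(DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED,
                                       __uuidof(IDWriteFactory),
                                       reinterpret_cast<IUnknown**>(&factory))))
            return names;

        IDWriteFontCollection* fonts = nullptr;
        if (SUCCEEDED(factory->GetSystemFontCollection(&fonts, FALSE))) {
            UINT32 familyCount = fonts->GetFontFamilyCount();
            for (UINT32 i = 0; i < familyCount; ++i) {
                IDWriteFontFamily* family = nullptr;
                if (FAILED(fonts->GetFontFamily(i, &family))) continue;

                IDWriteLocalizedStrings* familyNames = nullptr;
                if (SUCCEEDED(family->GetFamilyNames(&familyNames))) {
                    UINT32 length = 0;
                    familyNames->GetStringLength(0, &length);        // first locale name
                    std::wstring name(length + 1, L'\0');
                    familyNames->GetString(0, &name[0], length + 1);
                    name.resize(length);
                    names.push_back(name);
                    familyNames->Release();
                }
                family->Release();
            }
            fonts->Release();
        }
        factory->Release();
        return names;
    }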
EnumFontFamilies() is a winapi function, and very few of those can be used in a Store app, certainly not this one. Technically you can hack around the macros that stop you from using the function, but then you won't pass Store validation.
You'll find font-related methods in the Windows.Globalization.Fonts namespace, but not what you are looking for. Note the namespace name: WinRT no longer ignores the fact that the fonts that are available and usable on a machine have a lot to do with the language a user speaks, or rather with the glyphs used in written text for that language. Arbitrarily picking a font just doesn't work well for the billions of people who live in Asia.
It seems that there is no way to use outlet collections in a Mac application. Why not?
Just put your image views into an array -- if they're all subviews of the same superview, you can loop through that superview's subviews (testing whether they're NSImageViews, or checking a tag value) and add them. This way also has the advantage that you don't need to create IBOutlets for them.
Whether bindings are appropriate for your problem depends on what you want to do with the image views. I would need more information to comment on that.
Generally, if there's a feature that you want in the iOS or OS X frameworks and tools, it's best to raise a feature request on bugreport.apple.com.
If enough people request it, maybe the engineering managers will pay attention (or not).
But yes, the other answer about bindings is a good suggestion. Bindings are very cool and useful.
A Mac application can make use of bindings, which are quite useful and don't exist in the iOS SDK. You can achieve the same thing with them.
Read the Cocoa Bindings Programming Topics
What little information I managed to dig out on developing in-game overlays (similar to what Steam does) mentions having to intercept calls to the graphics API's frame-swapping function and hook my own drawing routine into it.
This appears to be what Mumble (a gaming VoIP application) is doing. Since I've never done anything that involves hooking, and since I don't really have much experience with DirectX, I'm wondering if there is some sort of SDK, or even just a more readable example than Mumble that also implements input, that demonstrates how to implement an interactive in-game overlay. Mumble is great, but I can't seem to wrap my head around it, especially around the more interesting things it does in order to hook its stuff properly.
Also, if you have more detailed info on how to do this on Mac and Linux... :-)
Maybe GLIntercept could give you some inspiration.
It provides an OpenGL32.dll file that you put in your app's folder. Windows loads this DLL instead of the one in system32 because of DLL search priority rules. GLIntercept forwards all calls to the system32 DLL, but logs them all along the way.
So, you could implement your own wglSwapBuffers(), render some more things, and then forward the call.
Source code is available as well.
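To make the idea concrete, here is a rough C++ sketch of such a proxy DLL (a real proxy would also have to forward every other opengl32 export, usually via a .def file, which is omitted here):

    // Sketch of a proxy opengl32.dll: export our own wglSwapBuffers, draw the
    // overlay, then forward to the real implementation in system32.
    #include <windows.h>

    typedef BOOL (WINAPI *PFN_wglSwapBuffers)(HDC hdc);
    static PFN_wglSwapBuffers g_realSwapBuffers = nullptr;

    static void DrawOverlay(HDC hdc) {
        // Issue your own OpenGL calls here; the current context is the game's.
        (void)hdc;
    }

    extern "C" __declspec(dllexport) BOOL WINAPI wglSwapBuffers(HDC hdc) {
        if (g_realSwapBuffers == nullptr) {
            wchar_t path[MAX_PATH];
            GetSystemDirectoryW(path, MAX_PATH);
            lstrcatW(path, L"\\opengl32.dll");   // the real DLL, loaded by full path
            HMODULE real = LoadLibraryW(path);
            if (real != nullptr)
                g_realSwapBuffers = reinterpret_cast<PFN_wglSwapBuffers>(
                    GetProcAddress(real, "wglSwapBuffers"));
            if (g_realSwapBuffers == nullptr)
                return FALSE;
        }
        DrawOverlay(hdc);                   // draw the overlay on top of the frame
        return g_realSwapBuffers(hdc);      // then forward so the game presents as usual
    }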
I upvoted your question, and I'm interested in your future discoveries... feel free to repost when you have more info :)
Start by designing your overlay without intercepting the graphics API. Keep in mind that the keyboard input must use global hooks.
Integrate it into the application using a Direct3D interceptor DLL. Google for one to use as base code.
Edit:
DirectX: Intercept Calls to DirectX with a Proxy DLL will give you downloadable source code.
OpenGL: Chromium might be a good start.
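As for the global hooks for key input mentioned above, here is a minimal sketch using a low-level keyboard hook (one common way to do it; the Home key as a toggle is just an arbitrary example):

    // Sketch: a global low-level keyboard hook that toggles the overlay,
    // e.g. on the Home key (an arbitrary choice for illustration).
    #include <windows.h>

    static HHOOK g_hook = nullptr;
    static bool  g_overlayVisible = false;

    static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION && wParam == WM_KEYDOWN) {
            const KBDLLHOOKSTRUCT* kb = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
            if (kb->vkCode == VK_HOME) {
                g_overlayVisible = !g_overlayVisible;   // flip overlay state
                return 1;                               // swallow the keystroke
            }
        }
        return CallNextHookEx(g_hook, code, wParam, lParam);
    }

    void InstallOverlayHotkeyHook() {
        // WH_KEYBOARD_LL is global and does not require injecting a DLL,
        // but the installing thread must pump messages for the hook to fire.
        g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                   GetModuleHandleW(nullptr), 0);
    }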
I'm still trying to get a handle on Cocoa (both in Obj-C and MacRuby), and I'd really appreciate seeing how to download a file with ASIHTTPRequest (or without it) and MacRuby. Ideally, I'd like to be able to show the progress in a progress bar too.
It must use a Cocoa method for downloading, since open-uri in MacRuby is broken.
Thanks for your help.
Here is an example app doing exactly that using HotCocoa: http://github.com/richkilmer/hotcocoa/tree/master/examples/download_and_progress_indicator
You would have to convert it to normal Cocoa but if you look at http://github.com/richkilmer/hotcocoa/blob/master/examples/download_and_progress_indicator/lib/application.rb you will see the main callbacks defined.
You might want to ask your questions on the MacRuby mailing list so people involved with the project can help.
Matt
P.S.: The Cocoa I/O methods are way more stable and efficient than Ruby's. Also keep in mind that you want to do async calls, something net/http won't help you with.
Here are more explanations and an example from the book I'm writing: http://macruby.labs.oreilly.com/ch03.html#_urls_requests_connections Hopefully that will help.
I am writing a sort of screen-recording app for Windows and wish to know when and which regions of the screen/active window have changed.
Is there a Windows API I can hook to get notified of screen changes?
Or would I need to manually write something like this? :(
I always figured that Remote Desktop used some sort of API to detect what regions of the screen had changed and only sent back those images - this is exactly the behavior that I need.
I don't think there is an API in Windows that can tell you which parts of the screen have changed.
One possible way is to use a video mirror driver, like UltraVNC does.
I think you'll find some clues in Screen Event Recorder DLL/Application, About Hooks, and Writing a Macro Recorder/Player using Win32 Journal Hooks.
It would seem that you're going to have to do a fair bit of work to detect screen changes. See this posting at tech-archive.net, for instance. With that approach you copy a reference screen to RAM, then take another capture and compare the two. It'd be up to you to define what kind of change is a meaningful one. It's similar material to this article on desktop capture.
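As a rough illustration of that compare step, here is a sketch that diffs two 32-bit frames of the same size block by block and reports a dirty rectangle for each block that changed (the block size and pixel format are arbitrary choices here):

    // Sketch: compare two captured frames (32-bit pixels, same dimensions) in
    // 32x32 blocks and collect a dirty rectangle for each block that changed.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct DirtyRect { int left, top, right, bottom; };

    std::vector<DirtyRect> DiffFrames(const uint32_t* prev, const uint32_t* curr,
                                      int width, int height, int block = 32) {
        std::vector<DirtyRect> dirty;
        for (int y = 0; y < height; y += block) {
            int bh = (y + block < height) ? block : height - y;
            for (int x = 0; x < width; x += block) {
                int bw = (x + block < width) ? block : width - x;
                bool changed = false;
                for (int row = 0; row < bh && !changed; ++row) {
                    const uint32_t* a = prev + (size_t)(y + row) * width + x;
                    const uint32_t* b = curr + (size_t)(y + row) * width + x;
                    changed = std::memcmp(a, b, bw * sizeof(uint32_t)) != 0;
                }
                if (changed)
                    dirty.push_back({x, y, x + bw, y + bh});
            }
        }
        return dirty;  // adjacent rects could be merged before re-capturing/encoding
    }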
I think Remote Desktop streams GDI-like commands. I don't know how they capture them in the first place.
Thanks for your help everyone. I ended up writing an image differencing class that seems to calculate the changed rectangles surprisingly quickly. I've posted the gist of how it works here.
At the moment I'm just running it on a timer, but I'm planning to run it after input events too.
Thanks heaps for your links, Boost - I've only just looked at this thread again, so I'll check them out soon.