Two programs using GPU acceleration - macOS

very noob question here.
I want to create a program that applies real-time effects to the screen
(macOS), such as blurring, color shift, etc.
My question is: if software that is already using GPU acceleration (such as Adobe Photoshop) is running, how likely am I to run into problems?
I guess what I'm asking is: is it possible for two programs to access GPU acceleration/OpenCL at the same time?

Multiple programs can safely use the same GPU. You may encounter stuttering or other issues if you're doing something intensive, but otherwise it will be fine. If you want to be safe, you can check CL_DEVICE_AVAILABLE to make sure it's OK to use a device, or use clCreateSubDevices to partition a single OpenCL device into multiple sub-devices that can work independently. (To clarify, this is not required, but it does give you finer control.)
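
For illustration, a minimal C sketch of the CL_DEVICE_AVAILABLE check might look like this (it simply grabs the first GPU device on the first platform; error handling is abbreviated):

    /* Sketch: query CL_DEVICE_AVAILABLE before creating a context.
       Uses the first platform and the first GPU device found. */
    #include <stdio.h>
    #include <OpenCL/opencl.h>   /* on macOS; <CL/cl.h> elsewhere */

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        cl_bool available = CL_FALSE;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) return 1;

        /* CL_DEVICE_AVAILABLE is CL_FALSE if the device cannot currently be used. */
        clGetDeviceInfo(device, CL_DEVICE_AVAILABLE, sizeof(available), &available, NULL);
        printf("GPU device available: %s\n", available ? "yes" : "no");
        return 0;
    }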

Related

Pixel drawing disadvantages?

Is there any disadvantage to drawing all one's graphics pixel by pixel, compared to using pre-defined rectangle- or circle-drawing functions? Defining such functions myself is fine. Mainly, I'm just worried about execution speed.
Also, what about sprite sheets? How do they compare to all of the above?
There can be a number of issues:
The OS or library-supplied routines may use hardware acceleration, which will be faster than what you're likely to write on your own (see the sketch after this list).
The OS or library-supplied routines have likely been tested better than code you'll write from scratch. (It's already being used by millions of people, so if there was a bug, it probably would have been hit by now!)
You'll have to update your routines when hardware changes, whereas an OS or library can abstract that away. (They have to change theirs as well, but they keep the interface the same so your program keeps running.)
You're unlikely to write routines that are both general and faster than those supplied by popular OSes and libraries, since those are written by experienced people, are highly optimized, and are often developed in conjunction with the people who designed the hardware.
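
To make the first point concrete, here is an illustrative C sketch (put_pixel and the framebuffer layout are made up for the example, not taken from any particular library) of why a bulk fill beats per-pixel plotting even before hardware acceleration enters the picture:

    #include <stdint.h>

    #define W 640
    #define H 480
    static uint32_t framebuffer[W * H];

    static void put_pixel(int x, int y, uint32_t color) {
        framebuffer[y * W + x] = color;        /* one call per pixel */
    }

    /* Naive approach: one function call per pixel (w*h calls). */
    void fill_rect_per_pixel(int x, int y, int w, int h, uint32_t color) {
        for (int j = 0; j < h; ++j)
            for (int i = 0; i < w; ++i)
                put_pixel(x + i, y + j, color);
    }

    /* Library-style approach: operate on whole rows at once; a real library
       would go further and hand the rectangle to the GPU. */
    void fill_rect_rows(int x, int y, int w, int h, uint32_t color) {
        for (int j = 0; j < h; ++j) {
            uint32_t *row = &framebuffer[(y + j) * W + x];
            for (int i = 0; i < w; ++i)
                row[i] = color;                /* tight inner loop, no call overhead */
        }
    }

    int main(void) {
        fill_rect_per_pixel(10, 10, 200, 100, 0xFF3355u);
        fill_rect_rows(10, 120, 200, 100, 0x33FF55u);
        return 0;
    }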

Most effective method to use parallel computing on different architectures

I am planning to write something to take advantage of the many devices that I have at home.
Basically my aim is to use the laptop to execute calculations, and also to use my main desktop computer to add more power (and finish the task more quickly). I work with cellular simulation and chemical interactions, so it would be great for me to take advantage of everything I have available at home.
I mainly use OS X, so I need something that works with that OS. I can code in Objective-C, C and C++.
I am aware of GCD, OpenCL and MPI, but I am not sure which way to go.
I was planning not to use the full power of my desktop but only some of the available cores (that way I can continue to use the desktop for other tasks that are not so resource-intensive). In particular I would love to use the graphics card (it is an ATI card, so no CUDA), since mostly all I do is spreadsheets, word processing and coding with Xcode, and the graphics card is basically unused in that scenario.
Is there a specific library or API, among the aforementioned three, that would allow me to selectively route tasks and use resources on another machine without leaving control entirely to the compiler? I've heard that GCD is great but gives very limited control over where blocks are executed, while MPI is at the other end of the spectrum; OpenCL seems to be in the middle.
Before diving into one of these technologies I would like to know which one would most likely suit my needs; I am sure some other researcher has already used parallel computing successfully to achieve what I am trying to achieve.
Thanks in advance.
MPI is more for large-scale scientific computing (many processors, many nodes, etc.), not for a weekend project. For what you describe I would suggest using OpenCL, or one of the more distributed messaging frameworks such as ZeroMQ or RabbitMQ (AMQP), or a combination of OpenCL and AMQP. Even simpler, consider multithreading; I would suggest OpenMP for that. I'm not sure whether you are looking for direct solvers or parallel functions, but many already exist for both GPUs and CPUs and can be found on the web.
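
For the multithreading route, a minimal OpenMP sketch in C might look like this (simulate_cell is a made-up stand-in for one unit of your simulation; compile with -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    /* Hypothetical per-cell work; stands in for one simulation step. */
    static double simulate_cell(int i) {
        double x = 0.0;
        for (int k = 0; k < 100000; ++k)
            x += (i + 1) * 1e-6 * k;
        return x;
    }

    int main(void) {
        enum { N = 10000 };
        static double result[N];

        /* Each iteration is independent, so the loop parallelizes directly. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; ++i)
            result[i] = simulate_cell(i);

        printf("done on up to %d threads, result[0]=%f\n",
               omp_get_max_threads(), result[0]);
        return 0;
    }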
Sorry, but this question simply cannot be meaningfully answered as posed. To be sure, I could toss out a collection of buzzwords describing various technologies to look at like GCD, OpenMPI, OpenCL, CUDA and any number of other technologies which allow one to run a single program on multiple cores, multiple programs on different cooperating computers, or a single program distributed across CPU and GPU, and it sounds like you know about a number of those already so I wouldn't even be adding much value in listing the buzzwords.
To simply toss out such terms without knowing the full specifics of the problem you're trying to solve, however, is a bit like saying that you know English, French and a little German so sure, by all means - mix them all together in a single paragraph without knowing anything about the target audience! Similarly, you can parallelize a given computation in any number of ways, across any number of different processing elements, but whether that parallelization is actually a win or not is going to be entirely dependent on the nature of the algorithm, its data dependencies, how much computation is expected for each reasonable "work chunk", and whether it can be executed on a GPU with sufficient numeric precision, among many other factors. The more complex the technology you choose, the more those factors matter and the greater the possibility that the resulting code will actually be slower than its single-threaded, single machine counterpart. IPC overhead and data copying can, and frequently do, swamp all of the gains one might realize from trying to naively parallelize something and then add additional overhead on top of that, resulting in a net loss. This is why engineers who can do this kind of work meaningfully and well are in such high demand. :)
Without knowing anything about your calculations, I would move in baby steps. First try a simple multi-processor framework like GCD (which is already built into OS X and requires no additional dependencies to use) and figure out how to factor your code such that it can effectively use all of the available cores on a single machine. Once you've learned where the wins are (and if there even are any - if multi-threading isn't helping, multi-machine parallelization almost certainly won't either), try setting up several instances of the calculation on several machines with a simple IPC model that allows for distributing the work. Having already factored your algorithm(s) for multiple threads, it should be comparatively straightforward to further generalize the approach across multiple machines (though it bears noting that the two are NOT the same problem, and either way you still want to use all the cores available on any given target machine, so the two challenges are both complementary and orthogonal).
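
As a starting point, a minimal GCD sketch in C could look something like this (simulate_cell is a hypothetical stand-in for one unit of your calculation; GCD decides how many threads to use based on the available cores):

    /* Compile on OS X with: clang gcd_demo.c  (blocks are enabled by default) */
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static double simulate_cell(size_t i) {
        double x = 0.0;
        for (size_t k = 0; k < 100000; ++k)
            x += (double)(i + 1) * 1e-6 * (double)k;
        return x;
    }

    int main(void) {
        enum { N = 10000 };
        static double result[N];
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* dispatch_apply blocks until all N iterations have run, spreading
           them across the global concurrent queue's worker threads. */
        dispatch_apply(N, queue, ^(size_t i) {
            result[i] = simulate_cell(i);
        });

        printf("done, result[0]=%f\n", result[0]);
        return 0;
    }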

Is it possible to use GPU for raytracing without CUDA/OpenCL etc?

I'm working on Windows Phone 7, which does not support features like CUDA or OpenCL. I'm new to the GPU side of things. Is there anything on the GPU that I can use to help speed up raytracing, like triangle intersection tests, or selecting the correct colour from a texture?
CUDA and the like are really just higher-level languages for programming shaders, so any platform that supports programmable shaders gives you some capability to run general-purpose calculations on the GPU.
Unfortunately, it looks like Windows Phone 7 does not support custom programmable shaders, so GPU acceleration for a ray tracer is not really possible at this time. Even if it were, it is very difficult to use a GPU effectively for raytracing because of several very anti-GPU characteristics:
Poor memory coherency (each ray can easily interact with completely different geometry)
High branching factor (shaders work best with code that consistently follows a single path)
Large working set (a lot of geometry has to be accessible in memory at any one time to compute the outcome of even a single ray)
If your goal is to write a raytracer, it would probably be far easier to do it entirely on the CPU, and only then consider more esoteric optimizations.
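
For reference, a plain CPU-side ray/triangle intersection (Moller-Trumbore) is short enough to sketch here; this is illustrative only, with no SIMD and no acceleration structure:

    #include <math.h>

    typedef struct { double x, y, z; } vec3;

    static vec3 sub(vec3 a, vec3 b)   { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
    static vec3 cross(vec3 a, vec3 b) { return (vec3){a.y*b.z - a.z*b.y,
                                                      a.z*b.x - a.x*b.z,
                                                      a.x*b.y - a.y*b.x}; }
    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Returns 1 and writes the hit distance to *t if the ray hits the triangle. */
    int ray_triangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2, double *t) {
        const double eps = 1e-9;
        vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        vec3 p = cross(dir, e2);
        double det = dot(e1, p);
        if (fabs(det) < eps) return 0;          /* ray parallel to triangle */
        double inv = 1.0 / det;
        vec3 s = sub(orig, v0);
        double u = dot(s, p) * inv;
        if (u < 0.0 || u > 1.0) return 0;
        vec3 q = cross(s, e1);
        double v = dot(dir, q) * inv;
        if (v < 0.0 || u + v > 1.0) return 0;
        *t = dot(e2, q) * inv;                  /* distance along the ray */
        return *t > eps;
    }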
Raytracing is still a bit slow, even on an average modern desktop PC. You can speed it up by shooting just primary rays, but then rasterisation methods will actually be better and faster.
Are you certain you want to do raytracing on a phone, which has even less compute power than a PC? Phones are not designed for that kind of work.

Detect whether a Quartz Composition in a QCView will be rendered through software or hardware

I have a feeling there are combinations of Cocoa Quartz Compositions and GPUs which can't be handled by the GPU and which fall back on the software renderer, even if Core Image is "accelerated" normally. How would I detect such a situation?
Or more generally, how do I detect that a machine is too underpowered to handle a certain composition of a certain size, without actually playing the composition and measuring the FPS?
(Measuring the FPS through playing the composition in a hidden window is unlikely to work, since the QCView might detect that situation and optimise away the whole operation, or parts thereof. And even if it didn't do that today it might start doing that with the next update from Apple - it'd be an unreliable solution.)
Update: to be thorough, I did write some code to test-render the composition at full resolution in an ordered-out but properly sized window, trying to force the render to happen with [self startRendering]; [self snapshotImage]; [self stopRendering];. This took an amount of time which looked reasonable at first, until it turned out the slow machine was faster at running this test than the fast one. ;) In reality the slow machine renders the composition at a measly 2.24 FPS vs 27 FPS on the fast machine.
I'm guessing you're asking so that you can make a simpler fallback animation for weaker systems?
One option may be to check the user's hardware string as is mentioned here:
GPU Chipset Detection.
glGetString can return the GL_VENDOR, GL_RENDERER, GL_VERSION, or GL_EXTENSIONS string. You could theoretically use GL_VENDOR to identify Intel GMAs as too slow, or compare GL_RENDERER to a list of known poor-performing GPUs. If you're writing code for 10.6+ only, you only have to compare against GPUs used in Intel Macs, so the list shouldn't be too long.
This might not be quite the elegant solution you're looking for, but it should do the trick. I would also provide the user with an override to choose the higher or lower quality graphics if they wish.
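
As a rough sketch of that check (it assumes an OpenGL context is already current, for example the one backing your QCView, and the "slow renderer" test is purely illustrative):

    #include <string.h>
    #include <stdio.h>
    #include <OpenGL/gl.h>

    /* Returns 1 if the current renderer looks too weak for the composition.
       Without a current context, glGetString returns NULL. */
    int renderer_looks_underpowered(void) {
        const char *vendor   = (const char *)glGetString(GL_VENDOR);
        const char *renderer = (const char *)glGetString(GL_RENDERER);
        if (!vendor || !renderer) return 1;   /* no context: assume the worst */

        printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);

        /* Hypothetical blacklist: treat Intel GMA integrated parts as too slow. */
        if (strstr(renderer, "GMA") != NULL) return 1;
        return 0;
    }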

GDI has been accelerated. Does anyone know when this happened?

To sketch the background of this question: at work we use Dell Precision workstations. My current one has an NVidia Quadro FX1700.
My team is developing the graphics components for a real time data acquisition system.
So we are always watching to make sure the graphics operations don't use up too much CPU time. For quick checks, we have a couple of test programs that draw scenes at a specified rate (e.g. 10 fps), and we use plain old Task Manager to see where CPU usage is at.
One of these programs is heavy on GDI DrawRectangle calls (which are filled). This program always used to consume about 40% CPU user time, but since about a year ago (just guessing here) it only uses about 2-3% kernel time. So clearly some hardware acceleration is going on here. And indeed, if I turn hardware acceleration off, we're back to the original 40% user time.
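
For illustration, a stripped-down version of such a test program might look like this (just a sketch using FillRect rather than the actual test code; error handling omitted):

    /* Draw filled rectangles at roughly 10 fps and watch CPU usage in Task Manager. */
    #include <windows.h>

    int main(void) {
        HDC screen = GetDC(NULL);                      /* draw on the whole screen */
        HBRUSH brush = CreateSolidBrush(RGB(200, 30, 30));
        RECT r = { 100, 100, 900, 700 };

        for (int frame = 0; frame < 600; ++frame) {    /* ~60 seconds at 10 fps */
            FillRect(screen, &r, brush);
            Sleep(100);                                /* ~10 fps */
        }

        DeleteObject(brush);
        ReleaseDC(NULL, screen);
        return 0;
    }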
All of this is of course good news, because we were already thinking about going to OpenGL. Year after year GDI never got the benefit of hardware acceleration. Until some time ago that is.
Does anyone know anything more about this? Did Microsoft do this? Or is it gfx-card vendor specific?
Edit
Thanks for the answers so far (Ferrucio, Torlack and Rob Walker), but my question has not been answered yet. We are talking about a filled rectangle here, probably the most trivial function to optimize: just send a couple of coordinates to the GPU and let it rip. Yet it was always implemented on the CPU side.
So far the answers lead me to believe that NVidia finally saw the light (after more than 10 years) and accelerated GDI. And there was no announcement about this? There's no information to be found on it at all.
My internal customers ask me about the speedup of the graphics, and all I can say is "well, we got lucky".
Edit2
It does seem to be driver-related, according to the different answers. So NVidia, then, has made crappy GDI drivers for its workstation cards for years. It really was an accepted fact within this company that GDI was not accelerated, and all the tests confirmed this.
GDI works by calling various functions in the graphics device driver. There are a core set of functions that every driver must implement. Other functions may be implemented by the driver. If they are not, GDI will perform those functions itself.
If a particular function is not implemented in hardware then there is no point in the driver doing a software implementation of that function since GDI can probably do a better job. GDI is extremely well optimized for performance.
As more functions are implemented in hardware, not only do those functions perform much better, but there is also less work for GDI to do, resulting in less CPU time spent on graphics.
It may also be the case that the graphics card vendor, in an effort to get a card to market quickly, did not implement all the hardware functions the card could perform. Later versions of the driver may then implement that functionality, resulting in improved performance.
GDI has been accelerated for quite a while. As far as I recall, it did depend on your hardware and drivers to some extent. Why you've only seen such a jump in performance recently seems odd.
However, don't get too happy: GDI hardware acceleration is no longer supported in Vista, because the new desktop composition engine doesn't support it. On the other hand, in Vista you do gain fast window moving, since content doesn't always have to be redrawn by the application (and no tearing, I think?).
To some extent, GDI has always been accelerated. Even back in the old Win 3.1 days I remember buying cards (Number Nine) whose main selling point was hardware acceleration of GDI.
Vista has a new display driver architecture which would provide an opportunity for a dramatic increase in performance. Are you comparing like hardware/OS combinations?
A lot of the 2D stuff has been accelerated for some time, and each new major version of Windows has changed the way display drivers work. I believe it was with XP that Windows revamped its window manager layer. It's hard to compare, really, since XP is more similar to Windows 2000/NT than to any earlier version.
Some more info on Wikipedia: Development of Windows XP.
Windows 2000, certainly, was the first NT-kernel-based Windows to include DirectX, and it had some graphical improvements as well. Windows 2000 (Wikipedia)
I don't believe there have been major changes to the display driver model/2D subsystem between releases. So if you noticed a change like that, it's likely due to something nVidia did.
