I'm trying to render a RAW image with Core Image on OS X, using
[CIContext render:toIOSurface:bounds:colorSpace:].
However, there are often some black tiles in the result (as in the example below).
The location of the black tiles is not consistent.
The problem seems to occur more frequently on older Mac models.
This happens on both 10.9 and 10.10 (I haven't tried older versions).
Any ideas for a solution?
We ran into a similar problem when combining the output of CIRAWFilter with a Lanczos scale filter. We reported it to Apple, but haven't heard back yet. It turns out not to be much of a problem in practice, though: you don't really need Lanczos scaling with CIRAWFilter output, since you can more easily use the filter's inputScaleFactor to downsize your output.
We've encountered a strange problem on newer laptops with integrated graphics cards.
In order to draw TrueType fonts, we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
Now we have a problem with glyphs containing holes (only on machines with integrated graphics cards):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated in GL_FEEDBACK mode is corrupt. The debugger shows nothing unusual, all functions return success, and the parsing itself works fine too. It really is the binary data generated in GL_FEEDBACK mode that is wrong.
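For reference, the feedback pass looks roughly like this (a simplified sketch; the buffer size is arbitrary, and listBase stands for the display-list base we get from wglUseFontOutlines):

    #include <windows.h>
    #include <GL/gl.h>

    /* Simplified sketch of our feedback pass. */
    static GLint renderGlyphsToFeedback(GLuint listBase, const char *text,
                                        GLsizei len, GLfloat *buf, GLsizei bufSize)
    {
        glFeedbackBuffer(bufSize, GL_2D, buf);    /* 2D vertex tokens              */
        glRenderMode(GL_FEEDBACK);

        glListBase(listBase);                     /* lists from wglUseFontOutlines */
        glCallLists(len, GL_UNSIGNED_BYTE, text);

        /* Switching back returns the number of floats written; buf now
           holds GL_POLYGON_TOKEN/vertex records. This is the data that
           comes back corrupt on the integrated GPUs. */
        return glRenderMode(GL_RENDER);
    }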
Has anyone else encountered this problem?
And is there an alternative way to obtain the outlines and fills of TrueType fonts on Windows?
I'm just guessing into the blue here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not supported by widespread GPU driver OpenGL implementations. Only a handful of graphics cards from the previous century actually supported these rendering modes, so you would almost always fall back to a software implementation when using them.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually be trying to implement those rendering modes using GPU features (somewhat weird, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason why you're experiencing these problems.
In order to draw TrueType fonts, we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why don't you simply cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function, GetGlyphOutline? That is what wglUseFontOutlines uses internally anyway.
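Here is a minimal sketch of that direct route (assuming hdc already has the desired font selected into it; error handling trimmed):

    #include <windows.h>
    #include <stdlib.h>

    /* Fetch the native outline of a single glyph via GDI. */
    static BYTE *copyGlyphOutline(HDC hdc, WCHAR ch, DWORD *outSize)
    {
        GLYPHMETRICS gm;
        MAT2 identity = { {0, 1}, {0, 0}, {0, 0}, {0, 1} }; /* 2.14 fixed-point identity */

        /* First call with a NULL buffer yields the required size. */
        DWORD size = GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, 0, NULL, &identity);
        if (size == GDI_ERROR || size == 0)
            return NULL;

        BYTE *buf = (BYTE *)malloc(size);
        GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, size, buf, &identity);

        /* buf is a sequence of TTPOLYGONHEADER records, each followed by
           TTPOLYCURVE records (line segments and quadratic splines); holes
           are simply additional contours. */
        *outSize = size;
        return buf;
    }

Tessellating the returned contours into filled polygons is then a job for the GLU tessellator (gluNewTess and friends), which handles the holes correctly.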
Some libraries were upgraded on the servers where I work, and now I am getting weird behaviour out of ImageMagick version 6.7.8-9.
I am using the command
composite -compose Multiply bkg.gif overlay.gif output.gif
which used to put overlay.gif, a mostly white image, on top of bkg.gif. Now the same thing happens, but bkg.gif comes out negated! I tried changing from Multiply to Screen, which, according to the docs, is the same operation performed on the negated images (with the result negated back), but the output was the same.
I have worked around this by negating bkg.gif before doing the same operation, but this is not correct, and I would still have to do it in the many scripts that use this command, so I would like to actually solve the problem, or at least understand it.
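Concretely, the workaround is an extra -negate pass before the composite (bkg_neg.gif is just a temporary intermediate file):

convert bkg.gif -negate bkg_neg.gif
composite -compose Multiply bkg_neg.gif overlay.gif output.gif

This restores the old output for me, but it should not be necessary.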
Why is this happening? I apologize but I cannot provide the images we are using.
This is a weird situation. In Xcode's IB, I have an NSTableCellView subclass that I've built. It looks like this:
And when I run the app on my Mac, it comes out exactly as I would expect:
However, if any other Mac runs my app, every text field drops by somewhere between 5 and 10 pixels:
Notice how the image view on the top right, and the hairline separator remain correctly positioned. This is just the text fields.
My brain balks at solving this because I can't guess what the cause is: my Mac is Retina (hence the larger-looking image from my Mac right now), but I spend most of my time developing on a non-Retina LED Cinema Display. I've tested this on three Macs other than my own, and the results are the same; it seems my own Mac is the outlier.
Any guesses as to the cause of this layout discrepancy?
UPDATE: I am using springs and struts to lay out the app, but I also tried Auto Layout on one of my NIBs in case that was related. The results were identical.
Found the answer! It turns out I had an additional copy of Avenir installed on my Mac which nobody else had. That copy of Avenir has different metrics associated with it, and once I disabled the non-system-provided Avenir, my Mac showed the same screwed-up layout as every other computer. Yay?
How can I capture the screen with Haskell on Mac OS X?
I've read "Screen capture in Haskell?", but I'm working on a Mac mini, so the Windows solution is not applicable, and the GTK solution does not work because on the Mac it only captures a black screen.
How can I capture the screen with … and OpenGL?
Only with some luck. OpenGL is primarily a drawing API, and the contents of the main framebuffer are undefined unless they are drawn to by OpenGL functions themselves. That OpenGL could be abused for this at all was due to the way graphics systems used to manage their on-screen windows' framebuffers: after a window without a predefined background color/brush was created, its initial framebuffer content was simply whatever was on the screen right before the window's creation. If an OpenGL context was created on top of this, that content could be read out using glReadPixels, thereby creating a screenshot.
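For illustration, the old trick boiled down to something like this, once a context was current on a freshly created, undecorated window covering the area of interest (it deliberately relies on undefined behavior):

    #include <GL/gl.h>
    #include <stdlib.h>

    /* Historical screenshot hack: read back whatever happens to be in the
       window's never-drawn-to framebuffer. Broken under compositing. */
    static unsigned char *grabFramebuffer(int width, int height)
    {
        unsigned char *pixels = malloc((size_t)width * height * 3);

        glReadBuffer(GL_FRONT);              /* the on-screen buffer */
        glPixelStorei(GL_PACK_ALIGNMENT, 1); /* tightly packed rows  */
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        return pixels;                       /* bottom-up RGB data   */
    }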
Today window compositing has become the norm, which makes abusing OpenGL for taking screenshots almost impossible. With compositing, each window has its own off-screen framebuffer, and the screen's contents are composited only at the end. If you use the method outlined above, which relies on uninitialized memory containing the desired contents, on a compositing window system, the results will vary wildly, from a solid clear color, through distorted junk fragments, to plain noise.
Since taking a screenshot reliably must take into account a lot of idiosyncrasies of the system it is to happen on, it's virtually impossible to write a truly portable screenshot program.
And OpenGL is definitely the wrong tool for it, no matter that people (myself included) were able to abuse it for this in the past.
I wrote this C code to capture the screen on the Mac and show it in an OpenGL window via glDrawPixels:
opengl-capture.c
http://pastebin.com/pMH2rDNH
Writing the Haskell FFI binding for it is quite trivial. I'll do it soon.
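The heart of it is just a few CoreGraphics calls, sketched here (plain C; CGDisplayCreateImage requires OS X 10.6 or later, and the BGRA remark is an assumption about the main display's pixel format):

    #include <ApplicationServices/ApplicationServices.h>

    /* Grab the main display into a CGImage and copy out its raw pixels. */
    static CFDataRef copyScreenPixels(size_t *width, size_t *height,
                                      size_t *bytesPerRow)
    {
        CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
        if (!image)
            return NULL;

        *width       = CGImageGetWidth(image);
        *height      = CGImageGetHeight(image);
        *bytesPerRow = CGImageGetBytesPerRow(image);

        /* Copy the backing pixel data (typically 32-bit BGRA); pass
           CFDataGetBytePtr() of the result to e.g. glDrawPixels. */
        CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
        CGImageRelease(image);
        return data;
    }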
These might also be useful for working out the solution in C:
NeHe Productions - Using gluUnProject
http://nehe.gamedev.net/article/using_gluunproject/16013/
Apple Mailing Lists - Re: Screen snapshot example code posted
http://lists.apple.com/archives/cocoa-dev/2005/Aug/msg00901.html
Compiling OpenGL programs on Windows, Linux and OS X
http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2012/compiling.html
Grab Mac OS Screen using GL_RGB format
Very large images will not render in Google Chrome (although the scrollbars will still behave as if the image is present). The same images will often render just fine in other browsers.
Here are two sample images. If you're using Google Chrome, you won't see the long red bar:
Short Blue
http://i.stack.imgur.com/ApGfg.png
Long Red
http://i.stack.imgur.com/J2eRf.png
As you can see, the browser thinks the longer image is there, but it simply doesn't render it. The image format doesn't seem to matter either: I've tried both PNGs and JPEGs. I've also tested this on two different machines running different operating systems (Windows and OS X). This is obviously a bug, but can anyone think of a workaround that would force Chrome to render large images?
Not that anyone cares or is even looking at this post, but I did find an odd workaround. The problem seems to be with the way Chrome handles zooming. If you set the zoom property to 98.6% or lower, or to 102.6% or higher, the image will render; any value between 98.6% and 102.6% causes the rendering to fail. Note that the zoom property is not officially defined in CSS, so some browsers may ignore it (which is a good thing in this case, since this is a browser-specific hack). As long as you don't mind the image being resized slightly, this may be the best fix.
In short, the following code produces the desired result, as shown here:
<img style="zoom:98.6%" src="http://i.stack.imgur.com/J2eRf.png">
Update:
Actually, this is a good opportunity to kill two birds with one stone. As screens move to higher resolutions (e.g. the Apple Retina display), web developers will want to start serving up images that are twice as large and then scaling them down by 50%, as suggested here. So, instead of using the zoom property as suggested above, you could simply double the size of the image and render it at half the size:
<img style="width:50%;height:50%;" src="http://i.stack.imgur.com/J2eRf.png">
Not only will this solve your rendering problem in Chrome, but it will make the image look nice and crisp on the next generation of high-resolution displays.