Using QCView and iSight to capture image - cocoa

I have a QCView that loads a Quartz Composer file which gives you the iSight's feed (basically like a QTCaptureView). Everything displays fine.
The button takes a snapshot using the following lines of code:
- (void)takePicture:(id)sender {
    NSImage *currentImage = [outputView valueForOutputKey:@"ImageOutput"];
    [[currentImage TIFFRepresentation] writeToFile:@"/Users/hendo13/Desktop/capture.tiff" atomically:NO];
}
The exported image, however, has some very wonky colouring issues, like so:
http://kttns.org/gjhnj
No filters of any sort have been applied. Does anyone know what is causing this?

It's inverted. You can use the CIColorInvert filter to correct it (assuming there's no way to correct the actual output of the QC view).
Oh, and I think the blue and green channels are swapped, too (possibly an endianness problem?). If you go with the CIColorInvert solution, you can use CIColorMatrix to rearrange the channels, swapping blue and green back into their proper places. Here's a tutorial I wrote for it. I wrote it for the user interface in Core Image Fun House, but using the filter programmatically shouldn't be too hard once you understand how the filter works.
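A rough sketch of that two-filter correction (not from the original answer; it assumes currentImage holds the captured frame and uses the documented CIColorInvert/CIColorMatrix keys):

// Hedged sketch: invert the image, then swap the blue and green channels.
CIImage *input = [CIImage imageWithData:[currentImage TIFFRepresentation]];

CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
[invert setValue:input forKey:kCIInputImageKey];

// CIColorMatrix computes each output channel as a dot product with the input
// channels; routing blue into green (and green into blue) swaps the two.
CIFilter *swap = [CIFilter filterWithName:@"CIColorMatrix"];
[swap setDefaults];
[swap setValue:[invert valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
[swap setValue:[CIVector vectorWithX:0 Y:0 Z:1 W:0] forKey:@"inputGVector"];
[swap setValue:[CIVector vectorWithX:0 Y:1 Z:0 W:0] forKey:@"inputBVector"];
CIImage *corrected = [swap valueForKey:kCIOutputImageKey];
// Render `corrected` back into a bitmap before writing it to disk.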

Related

Unity displaying a texture's alpha channel incorrectly

I've been troubleshooting a shader for quite some time now, and just decided to take a closer look at the texture I'm using. There seems to be quite a difference between how I expected the textures to look (based on how the renders looked in Blender) and how Unity displays them (it looks the same when sampling the texture with a shader, by the way, so it's probably not related to the preview window). The right part of the screenshot also shows both the import and export settings for the image.
Is there some kind of color correction going on here? How could I handle that and get Unity to show the original image?
EDIT:
So I've done a lot of testing now, and here's some of what I've found out so far:
The color channels blanking out seems to be a bug within Unity. It only occurs when all four channels have the same values, and it does not affect the alpha channel. There's also no issue with only three identical color channels (an RGB image).
The change in brightness was caused by Blender as well. The values displayed by Unity seem to be correct, as verified by a third-party website.
Blender also showed the correct brightness after changing the view transform to Raw.
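(If you want to set that from a script rather than the UI, the equivalent in Blender's Python API should be the following; the property path assumes a reasonably recent Blender.)

import bpy

# Switch color management to Raw so pixel values are shown/written as-is
# instead of being run through the Standard/Filmic view transform.
bpy.context.scene.view_settings.view_transform = 'Raw'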
So hopefully these things are cleared up now; the issue definitely is within Blender (or probably more so in how I use it).
I am, however, still experiencing the artifacts from earlier. You can see here how the red channel (now carrying some other information) looks before saving:
And here is how it looks right after loading the saved image back into the compositor. It seems like what I was blaming on compression earlier is caused by the alpha channel somehow being overlaid onto all the other ones:
All the channels look fine before saving the image (still to a simple RGBA 64-bit PNG), so any help would still be appreciated. I'd also be glad to know if there's any further information I should provide.

How would I go about swapping different transparent images with others in Visual Basic 6?

So I have a programming project that I have to do for my school. What I have to do is set up a two-player dice game. I could have gone the easy way and just displayed the numbers of the two dice, but I was thinking of using images that I made in Photoshop instead. However, the problem is that I do not know how to change the images in an efficient way.
My first option is using the Visible property on several images laid on top of each other and changing it accordingly, like so:
Image1.Visible = False
Image2.Visible = True
However, I do not think that is very efficient. From my research, Image controls also do not support changing the image from code.
Secondly, I could use a PictureBox instead, which does support changing the image while the program is running. However, it does not support transparency, and the die images are transparent. Plus, it gives me an "invalid image file" error, I guess due to the transparency in the GIF files.
There is also the cheap workaround of making the background of the images the same as the form background.
So is there a more efficient way that I am missing? I know the cheap workaround would be the best option in this case, but I would like to have this knowledge for future use, for things like semi-transparent pixels that blend in and such.
And before you ask: no, I cannot use another programming language, as Visual Basic 6 is what my school teaches. Thankfully they are changing it soon, but I am stuck with this for now.
Turns out you CAN change the pictures of Image controls while keeping transparency and stretch:
Image1.Picture = LoadPicture("YOURPATHHERE.gif")
This is what I get for believing what I've read on some forum.
Also, the "invalid image file" error was due to the images being corrupted for some reason.
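Building on that, a small helper keeps the swap tidy. A minimal sketch; the file names and App.Path layout here are just an example:

' Hypothetical helper: show the face matching a die roll.
' Assumes die1.gif through die6.gif sit next to the executable.
Private Sub ShowDie(ByVal dieValue As Integer)
    Image1.Picture = LoadPicture(App.Path & "\die" & dieValue & ".gif")
End Sub

Private Sub Command1_Click()
    ' Call Randomize once at startup so the rolls vary between runs.
    ShowDie Int(Rnd * 6) + 1   ' random roll from 1 to 6
End Sub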

How to create a dynamic hover map with changing images

I am building a map navigation for a client and am stuck! I have built the map using an image map (<area> elements with coords) and all is fine. However, I now need each region to change colour when you hover over it.
I have done this before with simple sliced-up PNGs in Photoshop, but because of the complexity of the map and the overlapping items, this isn't an option. I don't even know where to start; I have tried some different tutorials and a lot of googling but can't find a good solution.
Here is my map so far: http://www.wiredcanvas.com/uploads/map/map.html
I want it so that when you click on a region it turns a different colour, and I may also add a tooltip if necessary.
Any advice or help as to where I should go from here would be very gratefully received!
Thank you in advance,
Alice
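For what it's worth, a common technique for this is to keep the image map and draw the highlights on a canvas overlay; the jQuery maphilight plugin does that out of the box. A minimal sketch, assuming the map image has its usemap attribute set and the plugin files are available:

<script src="jquery.min.js"></script>
<script src="jquery.maphilight.min.js"></script>
<script>
  $(function () {
    // Draws a hover highlight over every image that references a <map>.
    $('img[usemap]').maphilight({ fillColor: 'ff0000', fillOpacity: 0.4 });
  });
</script>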

Drawing large images for iPad

I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
I had the idea to put a UIView with a transparent background on top and draw the lines via touch events. This solution turned out to be very slow because the generated images are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a texture size limitation of 2048x2048 pixels.
Anyone have any idea or example of how I implement this?
I think you should try and tile your image.
One option is using CATiledLayer. Have a look at this short tutorial.
Or you could use CGContextDrawTiledImage to get the job done. Possibly this post from S.O. could help you get started.
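The heart of the CATiledLayer approach is just a view backed by a tiled layer; a minimal sketch (the class name and tile-loading helper are illustrative, not from PhotoScroller):

// A view backed by CATiledLayer: UIKit requests each tile lazily as it
// scrolls into view, so the full 3700x2000 image is never decoded at once.
@interface TiledImageView : UIView
@end

@implementation TiledImageView
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // Here rect covers a single tile; load and draw only that piece.
    UIImage *tile = [self tileForRect:rect]; // hypothetical helper
    [tile drawInRect:rect];
}
@end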

Mirroring a portion of the screen to an external display (in OSX)

I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor).
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window-management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).
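A rough sketch of the CGWindow route, grabbing a fixed screen rectangle into an NSImage (the rectangle values are placeholders; rerun this on a timer for a live view):

// Capture everything on screen inside `region` (Quartz Window Services).
CGRect region = CGRectMake(100.0, 100.0, 400.0, 300.0); // example portion

CGImageRef screenshot = CGWindowListCreateImage(region,
                                                kCGWindowListOptionOnScreenOnly,
                                                kCGNullWindowID,
                                                kCGWindowImageDefault);

// Wrap it for display in an NSImageView.
NSImage *frame = [[NSImage alloc] initWithCGImage:screenshot size:NSZeroSize];
CGImageRelease(screenshot);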
