I would like to add a reflection to an image, like this:
Is that possible in MonoTouch?
Thanks!
Mojo
It is possible, considering that you have full access to CoreGraphics.
There are many ways of skinning that cat, though.
Say you have the top image in "image"; I would do something like this (a rough sketch follows the steps):
Create a graphics context
Draw the image
Create a bitmap context for the inverted image, with alpha transparency
Render the image inverted
Render a gradient that has been configured to go from 0.5 opaque to 0.2 opaque
Render that on the bottom of the image
Get an image out of the second context
Draw the extracted image into the first context, inverted.
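To make those steps concrete, here is a rough sketch of the same idea. The question is about MonoTouch (C#), but the CoreGraphics/UIKit calls map one-to-one to the MonoTouch bindings, so I've written it in Swift purely for illustration. The function name, the reflectionHeight parameter, and the exact alpha values are my own choices, and the sketch collapses the second bitmap context into a single pass for brevity.

import UIKit

// Hypothetical helper: returns "image" with a faded, vertically flipped
// reflection appended below it.
func imageWithReflection(_ image: UIImage, reflectionHeight: CGFloat) -> UIImage? {
    let size = CGSize(width: image.size.width,
                      height: image.size.height + reflectionHeight)
    UIGraphicsBeginImageContextWithOptions(size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }

    // Draw the original image at the top.
    image.draw(at: .zero)

    // Draw the image again, flipped around its bottom edge, to form the reflection.
    ctx.saveGState()
    ctx.translateBy(x: 0, y: 2 * image.size.height)
    ctx.scaleBy(x: 1, y: -1)
    image.draw(at: .zero)
    ctx.restoreGState()

    // Fade the reflection by erasing a gradient out of it ("destination out"),
    // so it goes from roughly 0.5 opaque at the top to 0.2 opaque at the bottom.
    let colors = [UIColor.black.withAlphaComponent(0.5).cgColor,
                  UIColor.black.withAlphaComponent(0.8).cgColor] as CFArray
    let locations: [CGFloat] = [0, 1]
    if let gradient = CGGradient(colorsSpace: nil, colors: colors, locations: locations) {
        ctx.setBlendMode(.destinationOut)
        ctx.drawLinearGradient(gradient,
                               start: CGPoint(x: 0, y: image.size.height),
                               end: CGPoint(x: 0, y: image.size.height + reflectionHeight),
                               options: [])
        ctx.setBlendMode(.normal)
    }

    return UIGraphicsGetImageFromCurrentImageContext()
}

In MonoTouch the same sequence is available through the UIGraphics, CGContext and CGGradient bindings.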
I am wondering how I can use animated shapes inside a MovieClip that acts as a mask.
In my Animate CC canvas file I have an instance (stripeMask) that should mask the instance below it, called mapAnim.
stripeMask contains shapes that are animating in.
So when the function maskIn is called, the playhead should move to the first frame inside the stripeMask clip (the one after frame 0) and animate the mask like so:
function maskIn() {
    // maskAnimation to reveal image below
    stripeMask.gotoAndPlay(1);
}
I love Animate CC and it works great, but the need for more complex, animated masks is there, and that's not easy to achieve unless I am missing something here.
Thanks!
Currently you can only use a Shape as a mask, not a Container or MovieClip.
If you want to do something more complex, you can use something like AlphaMaskFilter, but it has to be cached, and then updated every time the mask OR the content updates:
stripeMask.cache(0, 0, w, h); // the filter source must be an image, so cache the mask clip first
something.filters = [new createjs.AlphaMaskFilter(stripeMask.cacheCanvas)];
something.cache(0, 0, w, h);
// On change (of the mask OR the content):
stripeMask.updateCache();
something.updateCache(); // re-caches so the filter is re-applied
The source of the AlphaMaskFilter must be an image, so you can either point to a Bitmap image, or a cacheCanvas of a mask clip you have also cached. Note that if the mask changes, the cache has to be updated as well.
This is admittedly not a fantastic solution, and we are working on other options.
I would like to apply a CIFilter to a CGPath. A quick search around reveals this is fairly straightforward on iOS. What are the options on OS X?
Are the steps:
create an image context,
create a CGPath that uses the image context,
apply the filter,
draw the image into the current graphics context (i.e. for the NSView)?
This seems like a huge amount of boilerplate for a reasonably common task. I just want to check that I have not missed anything!
Core Image operates on an image's pixels.
Filters in Core Image generate CIImage objects and do not change the original context. But you can create a CIContext to draw into the image context.
You can't apply a filter to the image context directly, but you can create an image from the image context, apply the filter to it, and then blend the images together.
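To illustrate that flow (rasterize the path offscreen, wrap the pixels in a CIImage, run the filter, then draw the result back with a CIContext), here is a minimal macOS sketch in Swift. The function name, the choice of CIGaussianBlur, and the assumption that you already have a destination CGContext (e.g. obtained from NSGraphicsContext.current inside an NSView's draw(_:)) are mine, not part of the answer.

import CoreGraphics
import CoreImage

func drawFilteredPath(_ path: CGPath, into target: CGContext, size: CGSize) {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let bitmap = CGContext(data: nil,
                                 width: Int(size.width),
                                 height: Int(size.height),
                                 bitsPerComponent: 8,
                                 bytesPerRow: 0,
                                 space: colorSpace,
                                 bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return }

    // 1. Rasterize the path into an offscreen bitmap context.
    bitmap.addPath(path)
    bitmap.setFillColor(red: 0, green: 0, blue: 0, alpha: 1)
    bitmap.fillPath()

    // 2. Wrap the rasterized pixels in a CIImage and apply the filter.
    guard let rasterized = bitmap.makeImage() else { return }
    let input = CIImage(cgImage: rasterized)
    guard let blur = CIFilter(name: "CIGaussianBlur") else { return }
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(4.0, forKey: kCIInputRadiusKey)
    guard let output = blur.outputImage else { return }

    // 3. Render the filtered CIImage back into the destination CGContext.
    let ciContext = CIContext(cgContext: target, options: nil)
    ciContext.draw(output,
                   in: CGRect(origin: .zero, size: size),
                   from: CGRect(origin: .zero, size: size))
}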
I'd like to take an image and use it as a mask for a view on which I add numerous image views. I know of the Quartz CGContextClipToMask() call, but what would be the best way to approach this? Can I override the drawRect method of a container view, call CGContextClipToMask() within it, and then expect its subviews to adhere to that clipping region? It doesn't seem to work.
Do I need to instead add some blocking mask image over top?
Instead of subclassing or overriding drawing functions, I chose to overlay the images with an image that had transparency in the viewable portion. For example, if my 'surface' was an image of a parchment and I wanted to draw a bunch of images on it, I would have the parchment image, then a container UIView for any images to be put on that parchment, and then a masking image on top of that. The masking image is the original parchment image with the parchment itself converted to full transparency, while the surrounding area is left exactly as the background the parchment sits on (all other UI widgets go on top of that).
This seems like a viable solution in all cases except when some image needs to visually animate around and behind the parchment (not my case).
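For what it's worth, the layering described above comes down to plain view z-order. A minimal sketch, where the image names, the container, and "parchment-cutout" (the parchment image with the parchment area made transparent) are all hypothetical:

import UIKit

// Bottom-to-top layering: visible surface, content, then the cut-out overlay.
let parchment = UIImageView(image: UIImage(named: "parchment"))       // the surface
let content = UIView(frame: parchment.bounds)                         // image views get added here
let overlay = UIImageView(image: UIImage(named: "parchment-cutout"))  // parchment area transparent, surroundings opaque

let root = UIView(frame: parchment.bounds)
root.addSubview(parchment)
root.addSubview(content)
root.addSubview(overlay)   // hides anything that spills outside the parchment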
First of all, I will explain my situation so you can understand my problem a little better. I'm making an HTML5 app. I have a canvas, and using a color picker you can change the color of the canvas. Now I have a picture which I want to put on the canvas, but that picture's color needs to be changed using a color picker. So I need to replace, let's say, the black color in that picture and then put it on the canvas so it doesn't mess up the background.
So that will look like this:
1st color picker - changes the color of the canvas
2nd color picker - replaces the black color in the image with the one from the color picker and puts it on the canvas
Now my problem is how to replace the color on the image without reloading the page.
My only condition is not to use Silverlight, Flash, Java, or any other similar technology that needs third-party software to be installed on the device.
Thanks in advance.
If you don't understand my query fully, feel free to ask.
My approach for a JS-only solution would be:
Load the image into a canvas element. Look at the MDC canvas tutorial.
Handle the user's click on the canvas and get the pixel color (see the links below for how to read a pixel's color, and look at this answer for getting the mouse position).
Replace that color throughout the canvas with the one the user picked. For some examples of pixel manipulation:
Pushing pixels with canvas at Mozilla Hacks
http://beej.us/blog/2010/02/html5s-canvas-part-ii-pixel-manipulation/
This JS at mezzoblue applies heavy filters to an image
After some canvas experiments I've noticed that in almost all browsers pixel manipulation with canvas can be very slow, even with small images. So another experiment could be to get the pixel color and then:
pass the color information to a PHP script (or another server-side script) with an AJAX call
do the color manipulation with an image library like GD or ImageMagick
return the modified image in the AJAX response
reload your canvas with the modified version of the image
In Mac OS X,
I have an image with black pixels on all 4 sides.
I want to programmatically crop the image to the largest rect that excludes them.
Should I check for the black pixels and build the crop rect myself, or is there a supported API for this?
Create an NSImage of the desired size, lock focus on it, draw the desired crop rectangle of the source image into the whole bounds of the destination image, and unlock focus. The image you created now contains the crop from the source image.
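A minimal sketch of that approach, assuming "source" is the original NSImage and "cropRect" is the region of the source (in its own coordinates) that you want to keep:

import AppKit

func crop(_ source: NSImage, to cropRect: NSRect) -> NSImage {
    let result = NSImage(size: cropRect.size)
    result.lockFocus()
    // Draw the desired crop rectangle of the source into the whole bounds of the new image.
    source.draw(in: NSRect(origin: .zero, size: cropRect.size),
                from: cropRect,
                operation: .copy,
                fraction: 1.0)
    result.unlockFocus()
    return result
}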
Note that this will lose information like resolution (DPI), color profile, and EXIF tags. If you want to preserve those things (probably a good idea), use CGImage:
Use CGImageSource to load the image. Be sure to recover the properties of each image from the file, as well as the images themselves. And note that I used the plural: TIFF files can contain multiple images.
Use the CGImageCreateWithImageInRect function to crop out the desired section of each image. Don't forget to release each original image as appropriate.
If you want to write the cropped-out images to a file, do so using CGImageDestination. Pass both the images and the attributes dictionaries you obtained in step 1.
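A rough Swift sketch of that route, with a hypothetical inputURL, outputURL, and a single cropRect applied to every image in the file (in Swift, ARC takes care of releasing the images, so there is no explicit release step):

import Foundation
import CoreGraphics
import ImageIO

func cropImages(at inputURL: URL, to cropRect: CGRect, writingTo outputURL: URL) {
    guard let source = CGImageSourceCreateWithURL(inputURL as CFURL, nil) else { return }
    let count = CGImageSourceGetCount(source)   // TIFF files can contain multiple images

    guard let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, type, count, nil)
    else { return }

    for index in 0..<count {
        // Recover the per-image properties (DPI, color profile, EXIF, ...) as well.
        let properties = CGImageSourceCopyPropertiesAtIndex(source, index, nil)

        guard let image = CGImageSourceCreateImageAtIndex(source, index, nil),
              let cropped = image.cropping(to: cropRect)   // CGImageCreateWithImageInRect
        else { continue }

        // Pass the cropped image along with the attributes obtained in step 1.
        CGImageDestinationAddImage(destination, cropped, properties)
    }

    CGImageDestinationFinalize(destination)
}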