So, in Swift, for iOS devices (though cross-platform would be cool if I can still code in Xcode), I am trying to do something rather simple.
All I want to do is have little particles moving around on the screen, leaving trails. Unfortunately, I can't use traditional particle systems because I need to program their movement.
Anyway, I was figuring that I could keep all the pixels in a large array and just change them as I need to. That way I can set pixels based on each particle and fade them away each frame, giving the illusion of a trail.
What would be a good way to do this? I have been looking at Quartz, Core Graphics, and OpenGL ES, but I can't find a single tutorial that tells me how to draw a single pixel... just ones that explain how to draw lines and other shapes. I just need to be able to draw the screen pixel by pixel (unless you have a better idea).
What framework should I use?
How do I draw a single pixel in it, given an (x, y) screen coordinate (or link a tutorial)?
Thanks much!
CoreImage with CIImageAccumulator can help you do this. In my FurrySketch project, I do something very similar. In a nutshell:
Begin a UI Graphics Context: UIGraphicsBeginImageContext(view.frame.size)
Use functions such as CGContextAddLineToPoint to draw to the context
Get a UIImage of the drawing: let drawnImage = UIGraphicsGetImageFromCurrentImageContext() (it gets wrapped in a CIImage in the next step)
Use a CISourceOverCompositing filter to composite that new image over the previous one in the accumulator, and write the composite back to the accumulator:
    compositeFilter.setValue(CIImage(image: drawnImage),
                             forKey: kCIInputImageKey)
    compositeFilter.setValue(imageAccumulator.image(),
                             forKey: kCIInputBackgroundImageKey)
    imageAccumulator.setImage(compositeFilter.valueForKey(kCIOutputImageKey) as! CIImage)
Display the new composite in the UI: imageView.image = UIImage(CIImage: imageAccumulator.image())
By also adding a blur to that image, you can get your particles to fade out too, as I've done here.
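Pulling those steps together, here is a rough sketch in current Swift syntax (not the actual FurrySketch code; the accumulator, filter, and image view are my own illustrative setup):

    import UIKit
    import CoreImage

    class ParticleCanvasViewController: UIViewController {
        // Assumed setup: an accumulator sized to the screen, a source-over
        // compositing filter, and an image view showing the running composite.
        let imageAccumulator = CIImageAccumulator(extent: UIScreen.main.bounds, format: .ARGB8)!
        let compositeFilter = CIFilter(name: "CISourceOverCompositing")!
        let imageView = UIImageView(frame: UIScreen.main.bounds)

        func drawParticles(_ particles: [CGPoint]) {
            // 1. Draw this frame's particle marks into an image context.
            UIGraphicsBeginImageContext(view.frame.size)
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.setStrokeColor(UIColor.white.cgColor)
            for particle in particles {
                context.move(to: particle)
                context.addLine(to: CGPoint(x: particle.x + 1, y: particle.y + 1))
            }
            context.strokePath()
            let drawnImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            guard let uiImage = drawnImage, let newDrawing = CIImage(image: uiImage) else { return }

            // 2. Composite the new drawing over the accumulator's previous contents.
            compositeFilter.setValue(newDrawing, forKey: kCIInputImageKey)
            compositeFilter.setValue(imageAccumulator.image(), forKey: kCIInputBackgroundImageKey)
            guard let composited = compositeFilter.outputImage else { return }
            imageAccumulator.setImage(composited)

            // 3. Show the running composite (add a CIGaussianBlur pass here each
            //    frame if you want the trails to fade over time, as mentioned above).
            imageView.image = UIImage(ciImage: imageAccumulator.image())
        }
    }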
I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but it fades out on the side. That is, it's drawn as a linear gradient, from alpha=1.0 to alpha=0.0.
My best guess is one of the draw() variants with NSCompositingOperation might help me do what I want, but I'm having trouble understanding how they could do this. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to be using different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. That will effectively punch out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 with the .destinationOut operation on top, you'll end up with your image going from alpha 1.0 to 0.0.
Because that punch-out happens to whatever is already in the backing store where the gradient is drawn, you'll want to be careful where and how you use it.
If you want to do this all within the drawing of the view, you should do the drawing of your image and the punch out gradient within a transparency layer using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer to prevent punching out anything else.
You could also first create a new NSImage to represent this faded variant, either using the drawingHandler constructor or NSCustomImageRep and doing the same image & gradient drawing within there (without worrying about transparency layers). And then draw that image into your view, or simply use that image with an NSImageView.
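If it helps, here's a rough sketch of that second approach (my own illustration, not tested against your setup; sourceImage stands in for your existing NSImage, and the fade is assumed to run toward the right edge):

    import AppKit

    func makeFadedImage(from sourceImage: NSImage) -> NSImage {
        let size = sourceImage.size

        // A clear-to-opaque gradient: alpha 0.0 on the left, 1.0 on the right.
        let gradientImage = NSImage(size: size, flipped: false) { rect in
            NSGradient(starting: .clear, ending: .black)?.draw(in: rect, angle: 0)
            return true
        }

        // Draw the image first, then punch it out with the gradient using
        // .destinationOut, leaving alpha 1.0 on the left fading to 0.0 on the right.
        return NSImage(size: size, flipped: false) { rect in
            sourceImage.draw(in: rect)
            gradientImage.draw(in: rect, from: .zero, operation: .destinationOut, fraction: 1.0)
            return true
        }
    }

You can then draw that image in your view's draw() or hand it to an NSImageView; because the punch-out happens inside the image's own drawing handler, nothing else in the view's backing store is affected.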
I'm looking for some advice on how to proceed.
I'm working on a Cocoa program (Objective-C) where I want to be able to draw on top of a bitmap image, defining areas that I can use to get information from the underlying image.
As an example, I'd like to create a box (or oval) and be able to get the average pixel value from the underlying image. Ultimately I want to designate a number of such regions where I am sampling the underlying image to provide various statistics.
Currently I'm using the NSImage class to draw my image, but I'm not sure how to go about drawing an NSBezierPath over that image. Would I be better off using something other than NSImage?
Do I simply override the NSImage drawRect method so that it draws a series of NSBezierPath objects?
I would like to be able to save these outlined regions as a layer so that they are available in the future.
You can use a CGBitmapContext (for the bitmap), CGImageMasks (for the masking), and CGPaths or CGContext* drawing primitives for the lines and curves.
A complete answer would be quite long, but that gives you a starting point.
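To make that starting point a bit more concrete, here is a rough sketch (in Swift to keep this page's examples in one language; the Core Graphics calls are identical from Objective-C). It renders a CGImage into a bitmap context clipped to an oval and averages the pixels that fall inside the region (values are premultiplied, so this is approximate for semi-transparent sources):

    import CoreGraphics

    func averageColor(in ovalRect: CGRect, of image: CGImage) -> (red: Double, green: Double, blue: Double)? {
        let width = image.width
        let height = image.height

        // RGBA bitmap context; Core Graphics allocates and owns the buffer.
        guard let context = CGContext(data: nil,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 0,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }

        // Start from a fully transparent canvas, then clip to the oval so that
        // only the pixels inside the region get drawn (ovalRect is in the
        // bitmap's own coordinate space, origin at the bottom left).
        let fullRect = CGRect(x: 0, y: 0, width: width, height: height)
        context.clear(fullRect)
        context.addEllipse(in: ovalRect)
        context.clip()
        context.draw(image, in: fullRect)

        guard let data = context.data else { return nil }
        let bytesPerRow = context.bytesPerRow
        let pixels = data.bindMemory(to: UInt8.self, capacity: bytesPerRow * height)

        // Average the RGB of every pixel that survived the clip (alpha > 0).
        var r = 0.0, g = 0.0, b = 0.0, count = 0.0
        for y in 0..<height {
            for x in 0..<width {
                let offset = y * bytesPerRow + x * 4
                if pixels[offset + 3] > 0 {
                    r += Double(pixels[offset])
                    g += Double(pixels[offset + 1])
                    b += Double(pixels[offset + 2])
                    count += 1
                }
            }
        }
        guard count > 0 else { return nil }
        return (r / count / 255.0, g / count / 255.0, b / count / 255.0)
    }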
I am developing a Map App for our school. Our school provided me with its own map image and coordinate information, so I want to use that image as the map source and show a point on it according to the user's location. Can anybody give me some advice?
Thanks in advance.
There are 2 ways:
It is possible to change the source of the map tiles of the Map control (e.g. from Bing to, say, Nokia or Google). However, for this to work, the tile source must implement mechanisms like quadkeys (e.g. see this). So, to answer your question: if you would like to use the Bing Map control with your school's map so that you can leverage the positioning features of the control, you would need a properly designed map-tile server. And there might be some legal issues with altering the Bing Map control, if I am not mistaken.
However, given that you are suggesting an image of the map and then doing positioning, I would suggest that it can be as easy as calibrating the pixel X-Y coordinate system of the map image against the geo-coordinates provided by the geo-watcher. Then, in your code, you do a simple mapping between these two systems and draw something on top of the image. For this part you could use a WriteableBitmap, or simply use the fact that you can overlay UI controls in Silverlight. For the latter, have a Canvas containing an image of your school's map, and on top of that Canvas place an <image> representing the device, changing its top-left coordinate with respect to the Canvas.
So, in summary: as the geo-watcher gives geo coordinates to your code, a mapping function (which you have pre-calibrated) converts them to pixel X-Y, and you use that X-Y to position an overlay <image> or to draw a "pin" on a WriteableBitmap where you have previously drawn your school's map (see the sketch below). Things get complicated with this approach when you want zooming as well, but the solution is easily scalable.
Does this help clear things a bit?
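To illustrate that mapping (written in Swift only to keep this page's examples in one language; the math is identical in C#), here is a minimal sketch assuming you have two calibration points, i.e. two spots whose geo coordinates and pixel positions on the map image are both known:

    struct MapCalibration {
        // Two reference spots: their geo coordinates and their pixel
        // positions on the school's map image.
        let geoA: (lat: Double, lon: Double)
        let pixelA: (x: Double, y: Double)
        let geoB: (lat: Double, lon: Double)
        let pixelB: (x: Double, y: Double)

        // Linear interpolation on each axis between the two references;
        // good enough over a campus-sized area.
        func pixel(forLat lat: Double, lon: Double) -> (x: Double, y: Double) {
            let x = pixelA.x + (lon - geoA.lon) / (geoB.lon - geoA.lon) * (pixelB.x - pixelA.x)
            let y = pixelA.y + (lat - geoA.lat) / (geoB.lat - geoA.lat) * (pixelB.y - pixelA.y)
            return (x, y)
        }
    }

Each time the geo-watcher fires, feed the new position through pixel(forLat:lon:) and use the result to set the Canvas.Left/Canvas.Top of the overlay <image>, or as the point to draw onto the WriteableBitmap.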
Answering the 2nd question in the comment below:
Yes, you can zoom in and out of the canvas, but you would have to program it yourself; the Canvas control does not have this capability built in. You would have to recognize the triggers for a zoom action (e.g. clicking the (+) or (-) buttons, or pinch and stretch gestures) and react by re-drawing a portion of the canvas so that the chosen region stretches over the entire canvas; that is, zooming. For instance, for the zoom-in case: you determine a region whose size corresponds to the zoom factor and is in proportion to the dimensions of the canvas, then scale that portion up so that edges and the empty spaces representing walls and the gaps between them grow proportionately. You also have to determine the center point of that region, which you keep fixed on the canvas so that everything grows away from it. That way you achieve an appropriate zooming effect. At this point you would have to re-adjust your geo-coordinate-to-pixel-X-Y mapping function so that the "pin" or object of interest is drawn accurately on the newly rendered surface.
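As a tiny illustration of that re-adjustment (names are mine, purely illustrative): once you have picked a zoom factor and the fixed centre point, the previously computed pixel position simply scales away from that centre.

    func zoomedPixel(_ p: (x: Double, y: Double),
                     center: (x: Double, y: Double),
                     zoom: Double) -> (x: Double, y: Double) {
        // Points move away from the fixed centre as the zoom factor grows.
        return (center.x + (p.x - center.x) * zoom,
                center.y + (p.y - center.y) * zoom)
    }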
I understand that this can appear quite involved, but it is straightforward once you appreciate the mechanics of what is required.
Another, easier option could be to use SVG (Scalable Vector Graphics) in a Web-Browser control. Note that you would still need the geo-coordinate to pixel-XY mapping. However, with this approach you get the zooming for free through the combination of SVG (which has transformation capabilities for the scale-up and scale-down operations) and the Web-Browser control, which renders the SVG and handles the zoom gestures on the map. The main cost here, I believe, would be re-creating your school's bitmap map as SVG. There are tools like Inkscape which you can use to load the image of your map and then trace the outlines over it; you can then save that outline document as an SVG. In fact, I would recommend trying this approach before tackling the Canvas method, as I feel it would be the easiest path for your needs.
I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't quite so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while it is all on the same screen isn't really applicable either, as it obviously shouldn't take the background colors with it.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an "FBO", or FrameBuffer, in libGDX), blending them as they're added, and then render that off-screen buffer to the screen as a single draw call? If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
I couldn't post the image, but I use the "CGContextDrawRadialGradient" method to draw a shaded blue ball (~40 pixel diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edges of the blue ball and expands outward (the radius grows with a timer). The white ring fades as it expands outward, like a radio wave.
Looks great running in the simulator, but it runs incredibly slowly on the iPhone 4. The ring should pulse in about a second (as in the simulator), but it takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer, and some segments on gradient animation, but it isn't clear what I should be using for best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another layer? If so, how do I know which layer to update in drawRect?
Appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good and then draw each frame into a context you created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images would be to set the animationImages property of a UIImageView (although there are other options, like CALayer animations).
The newest Apple docs finally mention which pixel formats are supported in iOS so make sure to reference those when creating your bitmap context.
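For concreteness, here is a rough sketch of that pre-rendering step (my own illustration, not the answer's exact code; drawPulseFrame is a hypothetical stand-in for the radial-gradient drawing you already do in drawRect:, parameterised by animation progress):

    import UIKit

    func prerenderPulseFrames(count: Int, size: CGSize, scale: CGFloat) -> [UIImage] {
        let width = Int(size.width * scale)
        let height = Int(size.height * scale)
        var frames: [UIImage] = []

        for frame in 0..<count {
            // 32-bit RGBA with premultiplied alpha is one of the pixel
            // formats iOS supports for bitmap contexts.
            guard let context = CGContext(data: nil,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: 0,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { continue }
            context.scaleBy(x: scale, y: scale)

            // Draw this frame of the animation (ball, shadow, and the ring at
            // the radius implied by `progress`) into the off-screen context.
            let progress = CGFloat(frame) / CGFloat(max(count - 1, 1))
            drawPulseFrame(in: context, size: size, progress: progress)

            if let cgImage = context.makeImage() {
                frames.append(UIImage(cgImage: cgImage, scale: scale, orientation: .up))
            }
        }
        return frames
    }

    // Hypothetical stand-in for the existing CGContextDrawRadialGradient code,
    // with the ring radius derived from `progress` instead of a timer.
    func drawPulseFrame(in context: CGContext, size: CGSize, progress: CGFloat) {
        // ... ball, shadow, and expanding-ring drawing goes here ...
    }

    // Playback is then cheap, e.g.:
    // imageView.animationImages = prerenderPulseFrames(count: 30,
    //                                                  size: view.bounds.size,
    //                                                  scale: UIScreen.main.scale)
    // imageView.animationDuration = 1.0
    // imageView.startAnimating()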