Getting error after processing image (iOS, Swift)

I am getting this error when I try to process an image and send it to SwiftOCR:
NSAssert(widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
If I bypass the handleRectangles function and just call the SwiftOCR function with the first image taken, it works fine, but after putting the image through the processImage function it crashes with the above error.
Here are my functions:
lazy var rectanglesRequest: VNDetectRectanglesRequest = {
    print("Tony 1 Requested....")
    return VNDetectRectanglesRequest(completionHandler: self.handleRectangles)
}()

@objc func processImage() {
    finalImage = nil
    // finalImage = main.correctedImageView.image
    guard let uiImage = correctedImageView.image
        else { fatalError("no image from image picker") }
    guard let ciImage = CIImage(image: uiImage)
        else { fatalError("can't create CIImage from UIImage") }
    let orientation = CGImagePropertyOrientation(uiImage.imageOrientation)
    inputImage = ciImage.oriented(forExifOrientation: Int32(orientation.rawValue))

    // Show the image in the UI.
    // imageView.image = uiImage

    // Run the rectangle detector, which upon completion runs the ML classifier.
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: CGImagePropertyOrientation(rawValue: UInt32(Int32(orientation.rawValue)))!)
    DispatchQueue.global(qos: .userInteractive).async {
        do {
            try handler.perform([self.rectanglesRequest])
        } catch {
            print(error)
        }
    }
}
func handleRectangles(request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNRectangleObservation]
        else { fatalError("unexpected result type from VNDetectRectanglesRequest") }
    guard let detectedRectangle = observations.first else {
        // DispatchQueue.main.async {
        //     self.classificationLabel.text = "No rectangles detected."
        // }
        return
    }
    let imageSize = inputImage.extent.size

    // Verify detected rectangle is valid.
    let boundingBox = detectedRectangle.boundingBox.scaled(to: imageSize)
    guard inputImage.extent.contains(boundingBox)
        else { print("invalid detected rectangle"); return }

    // Rectify the detected image and reduce it to inverted grayscale for applying model.
    let topLeft = detectedRectangle.topLeft.scaled(to: imageSize)
    let topRight = detectedRectangle.topRight.scaled(to: imageSize)
    let bottomLeft = detectedRectangle.bottomLeft.scaled(to: imageSize)
    let bottomRight = detectedRectangle.bottomRight.scaled(to: imageSize)
    let correctedImage = inputImage
        .cropped(to: boundingBox)
        .applyingFilter("CIPerspectiveCorrection", parameters: [
            "inputTopLeft": CIVector(cgPoint: topLeft),
            "inputTopRight": CIVector(cgPoint: topRight),
            "inputBottomLeft": CIVector(cgPoint: bottomLeft),
            "inputBottomRight": CIVector(cgPoint: bottomRight)
        ])

    // Show the pre-processed image
    DispatchQueue.main.async {
        print("Tony: 1 adding image")
        self.finalImage = UIImage(ciImage: correctedImage)
        self.FinalizedImage.image = self.finalImage
        // }else {
        //     print("Tony: No corected image......")
        if self.FinalizedImage.image != nil {
            print("Tony: 2 Got here to OCR")
            self.perform(#selector(self.startOCR), with: nil, afterDelay: 1.0)
        }
    }
}
And this is the OCR function:
@objc func startOCR() {
    print("Tony: OCR called")
    if self.FinalizedImage.image != nil {
        swiftOCRInstance.recognize(FinalizedImage.image!) { recognizedString in
            self.classificationLabel.text = recognizedString
            print("Tony: \(recognizedString)")
        }
    } else {
        print("Tony: No image here")
    }
}

I was able to figure this out. I had to convert the image from CIImage to CGImage and then back to UIImage, because a CIImage is only a description of how the filters will affect the image once it is rendered, so I needed to render it into a concrete bitmap first.
Keep a CIImage var (inputImage here), or set inputImage to whichever CIImage you are using:
inputImage = correctedImage
// Show the pre-processed image
DispatchQueue.main.async {
    print("Tony: 1 adding image")
    let cgImage = self.context.createCGImage(self.inputImage, from: self.inputImage.extent)
    self.finalImageView.image = UIImage(cgImage: cgImage!)

    // Filter logic
    let currentFilter = CIFilter(name: "CISharpenLuminance")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = self.context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    self.finalImageView.image = processedImage

    // Then start the OCR work using finalImageView.image as the input image of the OCR
}
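The key step is forcing Core Image to actually render the filter chain. A minimal sketch of that conversion as a reusable helper (the name renderedUIImage(from:using:) is mine, not from the original code):

import UIKit
import CoreImage

// Renders a CIImage into a bitmap-backed UIImage.
// A CIImage is only a recipe; createCGImage forces the filters to run.
func renderedUIImage(from ciImage: CIImage, using context: CIContext = CIContext()) -> UIImage? {
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}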

Related

Cropping NSImage with onscreen coordinates incorrect on different screen sizes

I'm trying to replicate macOS's screenshot functionality, dragging a selection onscreen to provide coordinates for cropping an image. I have it working fine on my desktop Mac (2560x1600), but testing on my laptop (2016 rMBP 15", 2880x1800), the cropped image is completely wrong. I don't understand why I'd get the right results on my desktop but not on my laptop. I think it has something to do with Quartz coordinates being different from Cocoa coordinates, seeing as on the laptop the resulting image looks like the coordinates are flipped on the Y-axis.
Here is the code I am using to generate the cropping CGRect:
// Segment used to draw the CAShapeLayer:
private func handleDragging(_ event: NSEvent) {
    let mouseLoc = event.locationInWindow
    if let point = self.startPoint,
        let layer = self.shapeLayer {
        let path = CGMutablePath()
        path.move(to: point)
        path.addLine(to: NSPoint(x: self.startPoint.x, y: mouseLoc.y))
        path.addLine(to: mouseLoc)
        path.addLine(to: NSPoint(x: mouseLoc.x, y: self.startPoint.y))
        path.closeSubpath()
        layer.path = path
        self.selectionRect = path.boundingBox
    }
}

private func startDragging(_ event: NSEvent) {
    if let window = self.window,
        let contentView = window.contentView,
        let layer = contentView.layer,
        !self.isDragging {
        self.isDragging = true
        self.startPoint = window.mouseLocationOutsideOfEventStream
        shapeLayer = CAShapeLayer()
        shapeLayer.lineWidth = 1.0
        shapeLayer.fillColor = NSColor.white.withAlphaComponent(0.5).cgColor
        shapeLayer.strokeColor = NSColor.systemGray.cgColor
        layer.addSublayer(shapeLayer)
    }
}
Then this is the code where I actually generate the screenshot and crop using the CGRect:
public func processResults(_ rect: CGRect) {
    if let windowID = self.globalWindow?.windowNumber,
        let screen = self.getScreenWithMouse(), rect.width > 5 && rect.height > 5 {
        self.delegate?.processingResults()
        let cgScreenshot = CGWindowListCreateImage(screen.frame, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)

        var rect2 = rect
        rect2.origin.y = NSMaxY(self.getScreenWithMouse()!.frame) - NSMaxY(rect)

        if let croppedCGScreenshot = cgScreenshot?.cropping(to: rect2) {
            let rep = NSBitmapImageRep(cgImage: croppedCGScreenshot)
            let image = NSImage()
            image.addRepresentation(rep)
            self.showPreviewWindow(image: image)

            let requests = [self.getTextRecognitionRequest()]
            let imageRequestHandler = VNImageRequestHandler(cgImage: croppedCGScreenshot, orientation: .up, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try imageRequestHandler.perform(requests)
                } catch let error {
                    print("Error: \(error)")
                }
            }

            DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
                self.hidePreviewWindow()
            }
        }
    }
    self.globalWindow = nil
}
Not 15 minutes after I asked this question, I tried one more thing and it works!
Relevant snippet:
var correctedRect = rect

// Set the Y origin properly (counteracting the flipped Y-axis)
correctedRect.origin.y = screen.frame.height - rect.origin.y - rect.height

// Check if we're on another screen
if screen.frame.origin.y < 0 {
    correctedRect.origin.y = correctedRect.origin.y - screen.frame.origin.y
}

// Finally, correct the x origin (if we're on another screen, the origin will be larger than zero)
correctedRect.origin.x = correctedRect.origin.x + screen.frame.origin.x

// Generate the screenshot inside the requested rect
let cgScreenshot = CGWindowListCreateImage(correctedRect, .optionOnScreenBelowWindow, CGWindowID(windowID), .bestResolution)
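The same correction can be wrapped in a small helper if it is needed in more than one place. A minimal sketch, assuming the NSScreen passed in is the one the selection was made on (the name quartzRect(for:on:) is mine):

import AppKit

// Converts a Cocoa-style rect (origin at the bottom-left of the screen) into the
// top-left-origin coordinates that CGWindowListCreateImage expects.
func quartzRect(for rect: CGRect, on screen: NSScreen) -> CGRect {
    var corrected = rect
    // Flip the Y axis relative to the screen height.
    corrected.origin.y = screen.frame.height - rect.origin.y - rect.height
    // Account for secondary screens whose frames have negative Y origins.
    if screen.frame.origin.y < 0 {
        corrected.origin.y -= screen.frame.origin.y
    }
    // Offset X for screens whose origin is not zero.
    corrected.origin.x += screen.frame.origin.x
    return corrected
}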

How to merge 3 images (100x100px) into one new large image (300x100px) in Swift

I am very new to SwiftUI and haven't found a way to do this.
I want to create one image (300x100) dynamically by merging three single 100x100 images horizontally:
ImageA (100x100) + ImageB (100x100) + ImageC (100x100) = ImageD (300x100)
I found a way to show them in an HStack, but how can I get one new image so I can send the data to a new function?
Regards
Alex
Thanks a lot, I tried to use your function but got an error here: "Cannot convert value of type 'UIImage?' to expected element type 'UIImage'".
The code should draw the zero three times; number-0.png is just one large zero.
//
//  ContentView.swift
//  imagecombine
//
//  Created by Alex on 02.08.20.
//  Copyright © 2020 Alex. All rights reserved.
//

import SwiftUI

func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { context in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}

let redImage = UIImage(named: "number-0.png")
let greenImage = UIImage(named: "number-0.png")
let blueImage = UIImage(named: "number-0.png")
let image = combineHorizontally([redImage, greenImage, blueImage])

struct ContentView: View {
    var body: some View {
        Image(image)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
You can use UIGraphicsImageRenderer and draw the images one after another:
func combineHorizontally(_ images: [UIImage]) -> UIImage? {
    guard !images.isEmpty else { return nil }

    var size = CGSize.zero
    var scale = CGFloat.zero
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}
You said that there were only three images and they were 100 × 100, but the above should work regardless of the number and size (memory permitting, of course).
Anyway, this:
let image = combineHorizontally([redImage, greenImage, blueImage])
results in the three images drawn side by side as a single combined image.
To use that in a context where you don't want an optional, you can use the ! forced-unwrapping operator, the ?? nil-coalescing operator, or some other unwrapping pattern, e.g. guard let, if let, etc.
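For example, a minimal sketch of how the snippet from the question could be adapted (compactMap drops the optionals that UIImage(named:) returns, and the nil-coalescing fallback is only there to keep the example compiling; the constant names are mine):

import SwiftUI

// UIImage(named:) returns UIImage?, so compactMap produces the [UIImage]
// that combineHorizontally expects.
let sourceImages = [UIImage(named: "number-0.png"),
                    UIImage(named: "number-0.png"),
                    UIImage(named: "number-0.png")].compactMap { $0 }
let combined = combineHorizontally(sourceImages)

struct ContentView: View {
    var body: some View {
        // Image(_:) expects an asset name; a UIImage goes through Image(uiImage:).
        Image(uiImage: combined ?? UIImage())
    }
}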
Alternatively, if you don’t want to deal with optionals at all, you can write a rendition that doesn’t return an optional at all (but also doesn’t detect the error scenario where an empty array was provided):
func combineHorizontally(_ images: [UIImage]) -> UIImage {
    var size = CGSize.zero
    var scale: CGFloat = 1
    for image in images {
        scale = max(scale, image.scale)
        size = CGSize(width: size.width + image.size.width, height: max(size.height, image.size.height))
    }

    var position = CGPoint.zero
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        for image in images {
            image.draw(at: position)
            position.x += image.size.width
        }
    }
}

UIImageOrientation documentation is incorrect

Looking through Apple's documentation for UIImageOrientation I notice that the images that go with the descriptions are incorrect.
https://developer.apple.com/reference/uikit/uiimageorientation
This has been painful for me, so I'm going to leave this here, with the correct images in the answer in case others find the same.
If people think this shouldn't be here, please comment / vote down and I'll remove.
Here's how I got the correct images:
extension UIImage {
    var normalised: UIImage {
        if imageOrientation == .up {
            return self
        }
        var normalisedImage: UIImage
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        format.opaque = true
        format.prefersExtendedRange = false
        normalisedImage = UIGraphicsImageRenderer(size: size, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
        return normalisedImage
    }

    func translated(to orientation: UIImageOrientation) -> UIImage {
        guard let cgImage = cgImage else {
            return self
        }
        return UIImage(cgImage: cgImage, scale: 1, orientation: orientation).normalised
    }
}
Then using this image as the "base"
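As a usage sketch (assuming a baseImage constant holding the base image mentioned above; the constant names are mine), each oriented variant can then be generated with the extension:

// Generate one correctly oriented image per UIImageOrientation case.
let up            = baseImage.translated(to: .up)
let down          = baseImage.translated(to: .down)
let left          = baseImage.translated(to: .left)
let right         = baseImage.translated(to: .right)
let upMirrored    = baseImage.translated(to: .upMirrored)
let downMirrored  = baseImage.translated(to: .downMirrored)
let leftMirrored  = baseImage.translated(to: .leftMirrored)
let rightMirrored = baseImage.translated(to: .rightMirrored)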

Issues with converting Swift 2.3 to Swift 3

Here is the code, which changes image pixels from one color to another:
class func processPixelsInImage(inputImage: UIImage, defaultColor: RGBA32, filledColor: RGBA32, currentValue: CGFloat, maxValue: CGFloat) -> UIImage? {
    guard let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) else {
        print("unable to create context")
        return nil
    }
    CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), inputCGImage)

    let pixelBuffer = UnsafeMutablePointer<RGBA32>(CGBitmapContextGetData(context))
    var currentPixel = pixelBuffer
    let prevPixel = pixelBuffer

    for i in 0 ..< Int(CGFloat(width) / maxValue * currentValue) {
        for j in 0 ..< Int(height) {
            if currentPixel.memory.color != 0 {
                if currentValue == maxValue {
                    currentPixel.memory = defaultColor
                } else {
                    currentPixel.memory = filledColor
                }
            }
            currentPixel = prevPixel
            currentPixel += (i + width * j)
        }
    }

    let outputCGImage = CGBitmapContextCreateImage(context)
    let outputImage = UIImage(CGImage: outputCGImage!, scale: inputImage.scale, orientation: inputImage.imageOrientation)
    return outputImage
    }
}
When I am trying to convert it to Swift 3, on this line:
let pixelBuffer = UnsafeMutablePointer<RGBA32>(CGBitmapContextGetData(context))
I have to change the UnsafeMutablePointer to
let pixelBuffer = UnsafeRawPointer(context.data)
and because of that, the memory property is no longer found on the currentPixel object.
How can I get the pixel memory?
The memory property has been replaced with the pointee property.
Source: https://swift.org/migration-guide/
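A minimal sketch of what the Swift 3 version of the pixel access could look like, assuming the same width, height, RGBA32, and filledColor as the original code; binding the raw context.data pointer gives back a typed pointer whose elements are read and written through pointee:

// context.data is an UnsafeMutableRawPointer?; bind it to RGBA32 for a typed pointer.
guard let pixelBuffer = context.data?.bindMemory(to: RGBA32.self, capacity: width * height) else {
    print("unable to get pixel buffer")
    return nil
}

var currentPixel = pixelBuffer
// Swift 2's .memory is now .pointee in Swift 3.
if currentPixel.pointee.color != 0 {
    currentPixel.pointee = filledColor
}
// Pointer arithmetic is unchanged.
currentPixel += 1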

Xcode Swift - sending a vector image to a server

Is it possible to send a UIImage in some vector format to a PHP server, and how would it be done? It's important that the image is in a vector format.
I have created an app where you can draw lines, and I would like to save those drawn lines to an image in some vector format. I say "some" because I'm not sure which format is best.
I just need to be able to save it and read it.
Here is my code for drawing:
@IBAction func panning(sender: AnyObject) {
    pan.maximumNumberOfTouches = 1;
    pan.minimumNumberOfTouches = 1;
    var currentPoint:CGPoint = pan.locationInView(self);
    var midPoint:CGPoint = midpoint(previousPoint, p1: currentPoint);

    if(pan.state == UIGestureRecognizerState.Began){
        path.moveToPoint(currentPoint);
    } else if (pan.state == UIGestureRecognizerState.Changed){
        path.addQuadCurveToPoint(midPoint, controlPoint: previousPoint);
    }

    path.lineCapStyle = kCGLineCapRound;
    path.lineWidth = 10;
    previousPoint = currentPoint;
    self.setNeedsDisplay();
}

override func drawRect(rect: CGRect) {
    //println(pan.velocityInView(self));
    UIColor.redColor().setStroke();
    path.stroke();
}

@IBAction func sendImage(sender: AnyObject) {
    UIGraphicsBeginImageContext(self.bounds.size);
    var ctx:CGContextRef = UIGraphicsGetCurrentContext();
    self.layer.renderInContext(ctx);
    var saveImg = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(saveImg, self, Selector("image:didFinishSavingWithError:contextInfo:"), nil);
    UIGraphicsEndImageContext();
}

func image(image: UIImage, didFinishSavingWithError error: NSErrorPointer, contextInfo: UnsafePointer<()>) {
    dispatch_async(dispatch_get_main_queue(), {
        UIAlertView(title: "Success", message: "This image has been saved to your Camera Roll successfully", delegate: nil, cancelButtonTitle: "Close").show()
    })
}

func midpoint(p0:CGPoint, p1:CGPoint) -> CGPoint {
    var retCG:CGPoint = CGPoint(x: (p0.x + p1.x) / 2.0, y: (p0.y + p1.y) / 2.0);
    return retCG;
}
Thanks.
