So I have 3 image views:
@IBOutlet weak var imageOne: UIImageView!
@IBOutlet weak var imageThree: UIImageView!
@IBOutlet weak var imageTwo: UIImageView!
I generate a random image from an array of image names; the randomly generated image is the "correct" image.
Here is how I'm currently doing it:
let randomIndex = Int(arc4random_uniform(UInt32(famousPeople.count)))
let correct = famousPeople[randomIndex]
self.imageOne.image = UIImage(named: correct)
Obviously the issue with this is that imageOne is always the correct image. I want the correct image to end up randomly in either image one, two, or three.
I thought I could do the following
let randomImageIndex = Int(arc4random_uniform(UInt32(famousPeople.count)))
var imageArray: [String] = ["imageOne", "imageTwo", "imageThree"]
let randomImage = imageArray[randomImageIndex]
then
self.randomImage.image = UIImage(named: correct)
However, I get an error along the lines of "value of type 'ViewController' has no member 'randomImage'".
Is there a good way to choose a random UIImageView, and then assign my randomly chosen "correct" image to it?
Extra context
The user can then choose the "correct" image and is shown a message.
import Foundation
var number1 = 0
var number2 = 0
var number3 = 0
// You would like to randomly choose one of number1, number2 and number3
// and assign a random number to it ...
func foo(_ n: inout [Int]) {
let i = Int(arc4random_uniform(UInt32(n.count)))
for j in 0..<n.count {
if i == j {
n[j] = random()
} else {
n[j] = 0
}
}
}
var arr = [number1, number2, number3]
foo(&arr) // [0, 1804289383, 0]
foo(&arr) // [846930886, 0, 0]
foo(&arr) // [0, 0, 1681692777]
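Applied back to the original question, the same idea works with an array of the outlets themselves rather than an array of their names. A minimal sketch, assuming the three @IBOutlets above and a non-empty famousPeople array:
let imageViews: [UIImageView] = [imageOne, imageTwo, imageThree]
// Pick the "correct" person and a random image view to display it in.
let correct = famousPeople[Int(arc4random_uniform(UInt32(famousPeople.count)))]
let chosenView = imageViews[Int(arc4random_uniform(UInt32(imageViews.count)))]
chosenView.image = UIImage(named: correct)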
Related
I am building a macOS SwiftUI app. I want to show the world axis such that the user is aware of the orientation of objects. I've looked at the documentation, but the showWorldOrigin debug setting is not available on macOS. Is there an alternative way to show the world axis that I am missing?
While I've found external libraries that create a world axis and add nodes to the scene, I was hoping there was a built-in method to simplify the task and reduce any error.
You can create your own procedural world axes for a macOS 3D app.
SwiftUI mac version
import SwiftUI
import SceneKit
struct ContentView : View {
@State private var scene = SCNScene()
@State private var axis = SCNNode()
var options: SceneView.Options = [.allowsCameraControl]
var body: some View {
ZStack {
SceneView(scene: scene, options: options).ignoresSafeArea()
let _ = scene.background.contents = NSColor.black
let _ = createWorldAxis()
let _ = axis.opacity = 0.1 // you can hide world axis
}
}
func createWorldAxis() {
let colors: [NSColor] = [.systemRed, .systemGreen, .systemBlue]
for index in 0...2 {
let box = SCNBox(width: 0.200, height: 0.005,
length: 0.005, chamferRadius: 0.001)
let material = SCNMaterial()
material.lightingModel = .constant
material.diffuse.contents = colors[index]
box.materials[0] = material
let node = SCNNode(geometry: box)
switch index {
case 0:
node.position.x += 0.1
case 1:
node.eulerAngles = SCNVector3(0, 0, Float.pi/2)
node.position.y += 0.1
case 2:
node.eulerAngles = SCNVector3(0, -Float.pi/2, 0)
node.position.z += 0.1
default: break
}
axis.addChildNode(node)
}
// Scale and attach the whole axis node once, after the three boxes are built.
axis.scale = SCNVector3(1.5, 1.5, 1.5)
scene.rootNode.addChildNode(axis)
print(axis.position)
}
}
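The let _ = ... lines inside the ZStack are a view-builder trick and run on every body evaluation. An alternative sketch is to do the one-time setup in .onAppear instead:
var body: some View {
    SceneView(scene: scene, options: options)
        .ignoresSafeArea()
        .onAppear {
            scene.background.contents = NSColor.black
            createWorldAxis()
            axis.opacity = 0.1 // lower this to fade the world axes
        }
}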
Cocoa version
import Cocoa
import SceneKit
class ViewController : NSViewController {
var axis = SCNNode()
var sceneView = SCNView()
override func viewDidLoad() {
super.viewDidLoad()
sceneView = self.view as! SCNView
sceneView.scene = SCNScene()
sceneView.allowsCameraControl = true
sceneView.backgroundColor = .black
self.createWorldAxis()
axis.opacity = 0.1 // you can hide world axis
}
func createWorldAxis() {
let colors: [NSColor] = [.systemRed, .systemGreen, .systemBlue]
for index in 0...2 {
let box = SCNBox(width: 0.200, height: 0.005,
length: 0.005, chamferRadius: 0.001)
let material = SCNMaterial()
material.lightingModel = .constant
material.diffuse.contents = colors[index]
box.materials[0] = material
let node = SCNNode(geometry: box)
if index == 0 {
node.position.x += 0.1
} else if index == 1 {
node.eulerAngles = SCNVector3(0, 0, Float.pi/2)
node.position.y += 0.1
} else if index == 2 {
node.eulerAngles = SCNVector3(0, -Float.pi/2, 0)
node.position.z += 0.1
}
axis.addChildNode(node)
}
// Scale and attach the whole axis node once, after the three boxes are built.
axis.scale = SCNVector3(1.5, 1.5, 1.5)
sceneView.scene?.rootNode.addChildNode(axis)
print(axis.position)
}
}
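Note that sceneView = self.view as! SCNView assumes the view controller's root view is already an SCNView (for example, set in the storyboard). If it is not, one option (an assumption on my part, not part of the answer above) is to supply one in loadView:
override func loadView() {
    // Provide an SCNView as the root view when none is configured in Interface Builder.
    self.view = SCNView(frame: NSRect(x: 0, y: 0, width: 600, height: 400))
}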
Posting this answer, as per OP comments...
Apple's docs list the following SCNDebugOptions:
.showPhysicsShapes
.showBoundingBoxes
.showLightInfluences
.showLightExtents
.showPhysicsFields
.showWireframe
.renderAsWireframe
.showSkeletons
.showCreases
.showConstraints
.showCameras
.showFeaturePoints
.showWorldOrigin
Curiously, the last two - .showFeaturePoints and .showWorldOrigin - are not defined in SceneKit. And the discussion notes refer only to ARKit, where they are defined.
The docs for SCNDebugOptions state that these are bit mask patterns ... and if we print them out, we get:
showPhysicsShapes: SCNDebugOptions(rawValue: 1)
showBoundingBoxes: SCNDebugOptions(rawValue: 2)
showLightInfluences: SCNDebugOptions(rawValue: 4)
showLightExtents: SCNDebugOptions(rawValue: 8)
showPhysicsFields: SCNDebugOptions(rawValue: 16)
showWireframe: SCNDebugOptions(rawValue: 32)
renderAsWireframe: SCNDebugOptions(rawValue: 64)
showSkeletons: SCNDebugOptions(rawValue: 128)
showCreases: SCNDebugOptions(rawValue: 256)
showConstraints: SCNDebugOptions(rawValue: 512)
showCameras: SCNDebugOptions(rawValue: 1024)
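That printout can be reproduced with a quick loop over the documented options; a minimal sketch:
let documentedOptions: [(String, SCNDebugOptions)] = [
    ("showPhysicsShapes", .showPhysicsShapes),
    ("showBoundingBoxes", .showBoundingBoxes),
    ("showLightInfluences", .showLightInfluences),
    ("showLightExtents", .showLightExtents),
    ("showPhysicsFields", .showPhysicsFields),
    ("showWireframe", .showWireframe),
    ("renderAsWireframe", .renderAsWireframe),
    ("showSkeletons", .showSkeletons),
    ("showCreases", .showCreases),
    ("showConstraints", .showConstraints),
    ("showCameras", .showCameras)
]
for (name, option) in documentedOptions {
    print("\(name): \(option)")
}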
So... we try this to get "the next one in order" (expecting it to equate to .showFeaturePoints):
sceneView.debugOptions = SCNDebugOptions(rawValue: 2048)
Turns out, that gives us RGB axis indicators ... the .showWorldOrigin.
For a simple scene with an extruded bezier path (a cube), using these options:
sceneView.debugOptions = [.renderAsWireframe, SCNDebugOptions(rawValue: 2048)]
we get a wireframe cube with the RGB world-axis indicators drawn at the origin.
Trying one further, thinking maybe we get .showFeaturePoints:
sceneView.debugOptions = SCNDebugOptions(rawValue: 4096)
doesn't seem to do anything - at least, I don't see any visual change in my simple Scene.
I found what I wanted using the UI. First, enable the on-screen controls for your scene:
myscene.showsStatistics = true
Then, click the configuration button on the bottom of your screen.
In the options dropdown select World Origin.
I am puzzled why all the debug options can be enabled programmatically except the World Origin. Nonetheless, this lets you see the axes.
I am trying to implement a semantic segmentation model in my application. I have been able to convert the u2net model to a Core ML model, but I am unable to get a workable result from the MLMultiArray output. The specification description is as follows:
input {
name: "input"
type {
imageType {
width: 512
height: 512
colorSpace: RGB
}
}
}
output {
name: "x_1"
type {
multiArrayType {
shape: 1
shape: 3
shape: 512
shape: 512
dataType: FLOAT32
}
}
}
The model works great when I open it and use the model preview functionality in Xcode: it shows the 2 labels in 2 colours (there are only 2 classes plus background). I want the same output in my application; however, when I manually process the MLMultiArray output into a CGImage I get different results. I am using the code provided here like this:
let image = output.cgImage(min: -1, max: 1, channel: 0, axes: (1,2,3))
This gives me something that looks somewhat usable but it has a lot of gradient within each channel. What I need is an image with simply 1 color value for each label.
I have tried converting the output of the model directly to an image through this sample code. This simply shows 'Inference Failed' in the Xcode model preview. When I try removing the unnecessary extra dimension in the MultiArray output I get this error:
"Error reading protobuf spec. validator error: Layer 'x_1' of type 'Convolution' has output rank 3 but expects rank at least 4."
What does the model preview in Xcode do that I am not doing? Is there a post-processing step I need to take to get usable output?
Answering my own question:
Turns out the resulting pixels for each channel represent the probability of that pixel belonging to the class represented by that channel.
In other words, for each pixel position, find the channel with the maximum value; that channel is the pixel's class (an argmax across channels).
func getLabelsForImage() {
....
setup model here
....
guard let output = try? model.prediction(input: input) else {
fatalError("Could not generate model output.")
}
let channelCount = 10
// Ugly, I know. But works:
let colors = [NSColor.red.usingColorSpace(.sRGB)!, NSColor.blue.usingColorSpace(.sRGB)!, NSColor.green.usingColorSpace(.sRGB)!, NSColor.gray.usingColorSpace(.sRGB)!, NSColor.yellow.usingColorSpace(.sRGB)!, NSColor.purple.usingColorSpace(.sRGB)!, NSColor.cyan.usingColorSpace(.sRGB)!, NSColor.orange.usingColorSpace(.sRGB)!, NSColor.brown.usingColorSpace(.sRGB)!, NSColor.magenta.usingColorSpace(.sRGB)!]
// I don't know my min and max output, -64 and 64 seems to work OK for my data.
var firstData = output.toRawBytes(min: Float32(-64), max: Float32(64), channel: 0, axes: (0,1,2))!.bytes
var outputImageData:[UInt8] = []
for _ in 0..<firstData.count {
let r:UInt8 = UInt8(colors[0].redComponent * 255)
let g:UInt8 = UInt8(colors[0].greenComponent * 255)
let b:UInt8 = UInt8(colors[0].blueComponent * 255)
let a:UInt8 = UInt8(colors[0].alphaComponent * 255)
outputImageData.append(r)
outputImageData.append(g)
outputImageData.append(b)
outputImageData.append(a)
}
// For every other channel, keep the per-pixel maximum and recolour accordingly (the argmax across channels).
for i in 1..<channelCount {
let data = output.toRawBytes(min: Float32(-64), max: Float32(64), channel: i, axes: (0,1,2))!.bytes
for j in 0..<data.count {
if data[j] > firstData[j] {
firstData[j] = data[j]
let r:UInt8 = UInt8(colors[i].redComponent * 255)
let g:UInt8 = UInt8(colors[i].greenComponent * 255)
let b:UInt8 = UInt8(colors[i].blueComponent * 255)
let a:UInt8 = UInt8(colors[i].alphaComponent * 255)
outputImageData[j*4] = r
outputImageData[j*4+1] = g
outputImageData[j*4+2] = b
outputImageData[j*4+3] = a
}
}
}
let image = imageFromPixels(pixels: outputImageData, width: 512, height: 512)
image.writeJPG(toURL: labelURL.deletingLastPathComponent().appendingPathComponent("labels.jpg"))
}
// I found this function here: https://stackoverflow.com/questions/38590323/obtain-nsimage-from-pixel-array-problems-swift
func imageFromPixels(pixels: UnsafePointer<UInt8>, width: Int, height: Int)-> NSImage { //No need to pass another CGImage
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let bitsPerComponent = 8 //number of bits in UInt8
let bitsPerPixel = 4 * bitsPerComponent // RGBA uses 4 components
let bytesPerRow = bitsPerPixel * width / 8 // bitsPerRow / 8 (in some cases, you need some paddings)
let providerRef = CGDataProvider(
data: NSData(bytes: pixels, length: height * bytesPerRow) //Do not put `&` as pixels is already an `UnsafePointer`
)
let cgim = CGImage(
width: width,
height: height,
bitsPerComponent: bitsPerComponent,
bitsPerPixel: bitsPerPixel,
bytesPerRow: bytesPerRow, //->not bits
space: rgbColorSpace,
bitmapInfo: bitmapInfo,
provider: providerRef!,
decode: nil,
shouldInterpolate: true,
intent: CGColorRenderingIntent.defaultIntent
)
return NSImage(cgImage: cgim!, size: NSSize(width: width, height: height))
}
I want to draw a square with NSBezierPath. The border of the square must be dashed, so I use a dash style, but I don't have any control over the number of segments created.
Apple's documentation is a bit vague: "When setting a line dash pattern, you specify the width (in points) of each successive solid or transparent swatch".
So I think I need a way to get the length of a curved Bézier path.
Does anyone have an idea how I can achieve that?
extension NSBezierPath {
var length: Double {
get{
let flattenedPath = self.bezierPathByFlatteningPath
let segments = flattenedPath.elementCount
var lastPoint:NSPoint = NSZeroPoint
var point:NSPoint = NSZeroPoint
var size :Double = 0
for i in 0..<segments {
let e:NSBezierPathElement = flattenedPath.elementAtIndex(i, associatedPoints: &point)
if e == .MoveToBezierPathElement {
lastPoint = point
} else {
let distance:Double = sqrt(pow(Double(point.x - lastPoint.x) , 2) + pow(Double(point.y - lastPoint.y) , 2))
size += distance
lastPoint = point
}
}
return size
}
}
}
With this extension I get the approximate length of a Bézier path. After that, everything is simple:
let myPath = NSBezierPath(roundedRect: myRect, xRadius: 50, yRadius: 50)
let pattern = CGFloat(myPath.length) / CGFloat(numbersOfSegments * 2) // divide the length by twice the number of segments we need (one dash plus one gap per segment)
myPath.setLineDash([pattern, pattern], count: 2, phase: 0)
myPath.stroke()
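As a quick sanity check on the extension (just a sketch), a path made only of straight segments returns exactly the expected value:
let test = NSBezierPath()
test.moveToPoint(NSPoint(x: 0, y: 0))
test.lineToPoint(NSPoint(x: 100, y: 0))
test.lineToPoint(NSPoint(x: 100, y: 100))
print(test.length) // 200.0 (two 100-point segments)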
I have a function that randomly outputs an SKColor.
func getRandomColor() -> SKColor{
let randomaval = arc4random_uniform(4)
var color = SKColor()
switch(randomaval)
{
case 0:
color = redColor
case 1:
color = greenColor
case 2:
color = blueColor
case 3:
color = yellowColor
default:()
}
return color
}
When two bodies collide I call this function to change colors
aball.color = getRandomColor()
if aball.color == redColor && getRandomColor() == redColor {
aball.color = getRandomColor() //to set the color to something other than red
aball.colorBlendFactor = 1.0
}
What I want is this: when I call aball.color = getRandomColor() and it returns redColor again, the if statement needs to run again until the function returns something other than redColor. Most of the time when my if condition is true, it returns redColor again, and I can't figure out how to avoid that. Basically, I want a different color to be returned every time getRandomColor is called. How do I accomplish that?
How about repeatedly calling your getRandomColor() function until the result is something other than the ball's current color?
var newColor = aball.color
repeat {
    newColor = getRandomColor()
} while newColor == aball.color
aball.color = newColor
Alternatively, you could re-write getRandomColor to accept a parameter of a color it shouldn't return and then call your amended function with aball.color:
func getRandomColor(notColor: SKColor) -> SKColor {
    var color = SKColor()
    repeat {
        let randomVal = arc4random_uniform(4)
        switch randomVal {
        case 0:
            color = redColor
        case 1:
            color = greenColor
        case 2:
            color = blueColor
        case 3:
            color = yellowColor
        default:
            break
        }
    } while color == notColor
    return color
}
Then, when you want to change the color of the ball:
aball.color = getRandomColor(notColor: aball.color)
Another approach is to have getRandomColor track its own last result. In your initialisation, declare:
var lastRandomColor = SKColor()
then
func getRandomColor() -> SKColor {
    var color = SKColor()
    repeat {
        let randomVal = arc4random_uniform(4)
        switch randomVal {
        case 0:
            color = redColor
        case 1:
            color = greenColor
        case 2:
            color = blueColor
        case 3:
            color = yellowColor
        default:
            break
        }
    } while color == lastRandomColor
    lastRandomColor = color
    return color
}
then just use:
aball.color = getRandomColor()
I prefer the 2nd approach, where you pass a color that you don't want returned, as it gives you more control: at certain times you might not want the ball to be some other colour, not just its own colour, e.g. not blue if it's in the sky. You could even pass an array of colours that shouldn't be returned, as sketched below.
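For example, a sketch of that array variant, assuming the same redColor/greenColor/blueColor/yellowColor constants as above:
func getRandomColor(excluding excluded: [SKColor]) -> SKColor {
    let allColors = [redColor, greenColor, blueColor, yellowColor]
    // Assumes at least one colour is left after filtering out the excluded ones.
    let allowed = allColors.filter { !excluded.contains($0) }
    let index = Int(arc4random_uniform(UInt32(allowed.count)))
    return allowed[index]
}
// Never the ball's current colour, and never blue while it is in the sky:
aball.color = getRandomColor(excluding: [aball.color, blueColor])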
In my game, I have a ball, walls and gaps between these walls. The ball and walls are SKSpriteNodes; the gap is an SKNode. I use the gap to count the score.
I have the following code to handle the contacts and collisions. I checked plenty of documents and tutorials but can't find why multiple gap contacts happen when my ball contacts the gap. It prints "gap contact" 6 times every time.
BodyType definitions:
enum BodyType:UInt32 {
case ball = 1
case wall = 2
case gap = 4
}
Ball:
var ball = SKSpriteNode(texture: SKTexture(imageNamed: "img/ball.png"))
ball.position = CGPointMake(frame.width/2, 50)
ball.physicsBody = SKPhysicsBody(texture: ball.texture, size: ball.size)
ball.physicsBody?.dynamic = true
ball.physicsBody?.allowsRotation = false
ball.physicsBody?.categoryBitMask = BodyType.ball.rawValue
ball.physicsBody?.collisionBitMask = BodyType.wall.rawValue
ball.physicsBody?.contactTestBitMask = BodyType.gap.rawValue | BodyType.wall.rawValue
Walls (walls are created through scheduledTimerWithTimeInterval every 2 seconds):
var wall1 = SKSpriteNode(texture: nil, color: UIColor.blackColor(), size: CGSizeMake(frame.size.width, 20))
var wall2 = SKSpriteNode(texture: nil, color: UIColor.blackColor(), size: CGSizeMake(frame.size.width, 20))
let gapSize: CGFloat = 70
let gapMargin: CGFloat = 50
let gapPoint = CGFloat(arc4random_uniform(UInt32(frame.size.width - gapSize - gapMargin * 2))) + gapMargin
wall1.position = CGPointMake(-wall1.size.width/2 + gapPoint, self.frame.size.height)
wall2.position = CGPointMake(wall1.size.width/2 + CGFloat(gapSize) + gapPoint, self.frame.size.height)
wall1.physicsBody = SKPhysicsBody(rectangleOfSize: wall1.size)
wall1.physicsBody?.dynamic = false
wall2.physicsBody = SKPhysicsBody(rectangleOfSize: wall2.size)
wall2.physicsBody?.dynamic = false
wall1.physicsBody?.categoryBitMask = BodyType.wall.rawValue
wall2.physicsBody?.categoryBitMask = BodyType.wall.rawValue
Gap Node:
var gap = SKNode()
gap.physicsBody = SKPhysicsBody(rectangleOfSize: CGSizeMake(CGFloat(gapSize), wall1.size.height))
gap.position = CGPointMake(CGFloat(gapSize)/2 + gapPoint, self.frame.size.height)
gap.runAction(moveAndRemoveWall)
gap.physicsBody?.dynamic = false
gap.physicsBody?.collisionBitMask = 0
gap.physicsBody?.categoryBitMask = BodyType.gap.rawValue
Contact Handling:
func didBeginContact(contact: SKPhysicsContact) {
let contactMask = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask
switch(contactMask) {
case BodyType.ball.rawValue | BodyType.gap.rawValue:
println("gap contact")
default:
println("wall contact")
movingObjects.speed = 0
explosion(ball.position)
gameOver = 1
}
}
Wall collision works perfectly and prints "wall contact" only once when I hit the wall, but I couldn't fix this multiple-contact issue between the ball and the gap.
UPDATE: The problem is with the ball's physics body. If I use circleOfRadius it works perfectly, but that doesn't provide a great experience for the player because my ball's shape is not a perfect circle, so I want the physics body to follow the texture.
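A common way to deal with this (just a sketch, not from the original post) is to keep the texture-based body and simply ignore duplicate begin-contact callbacks for the same gap node, for example by flagging the node in its userData on the first hit. The gap case in didBeginContact would then look something like:
case BodyType.ball.rawValue | BodyType.gap.rawValue:
    // A texture-based body can report several contact points for one pass,
    // so only count the first contact per gap node.
    let gapNode = contact.bodyA.categoryBitMask == BodyType.gap.rawValue
        ? contact.bodyA.node
        : contact.bodyB.node
    if gapNode?.userData?["scored"] == nil {
        let data = gapNode?.userData ?? NSMutableDictionary()
        data["scored"] = true
        gapNode?.userData = data
        println("gap contact") // count the score here, once per gap
    }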