I created a macOS single-window application and added a SceneKit Particle System file using the Stars template. The visual effect looks like this:
I want to add it to my view controller. Following the guidance of this answer, I got the result below, which is not what I wanted:
override func viewDidLoad() {
    super.viewDidLoad()

    let scene = SCNScene()
    let particlesNode = SCNNode()
    let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "")
    particlesNode.addParticleSystem(particleSystem!)
    scene.rootNode.addChildNode(particlesNode)

    skView.backgroundColor = .black
    skView.scene = scene
}
So, I'm wondering what's wrong and what should I do?
Here is the demo repo: Link Here
The particle system itself is the standard "star" SceneKit particle system available in Xcode, with no changes.
Well, I made a little progress. If I swivel the camera around 180 degrees, I can see the stars receding, so we can tell that the particle system is running fine. In the default orientation, though, all I saw was blinking lights. So I think the particles are being generated at a Z position of 0, the same as the camera's.
If I move the system's node away from the camera
particlesNode.position = SCNVector3(0, 0, -20)
I still just see blinking lights. But if I click on the SCNView, the animation works correctly. I see stars coming at me.
I don't understand why I have to click the view to get it to work right. I tried isPlaying = true but that made no difference.
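For reference, here is a minimal sketch of the whole view controller with both changes applied: the emitter pushed back from the camera, and the view asked to render every frame. This assumes the outlet is actually an SCNView (named scnView here; the original skView name suggests SpriteKit's SKView, which cannot display an SCNScene), and rendersContinuously is only my guess at why the view animates after a click, since SceneKit can pause rendering when it thinks nothing is changing:

import SceneKit

class ViewController: NSViewController {
    @IBOutlet var scnView: SCNView!  // assumed to be an SCNView, not an SKView

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene()

        let particlesNode = SCNNode()
        // Move the emitter away from the default camera at z = 0 so the
        // particles are not born on the camera plane.
        particlesNode.position = SCNVector3(0, 0, -20)
        if let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "") {
            particlesNode.addParticleSystem(particleSystem)
        }
        scene.rootNode.addChildNode(particlesNode)

        scnView.backgroundColor = .black
        scnView.scene = scene
        // Ask SceneKit to render every frame even without user interaction.
        scnView.rendersContinuously = true
        scnView.isPlaying = true
    }
}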
Related
When I finish playing an animation that includes camera movement, I want the camera controls to pick up where the clamped animation left off. But everything I have tried so far results in the camera jumping to a different location. (The lookAt position seems to be OK.)
I have tried capturing the animeCamera's attributes and resetting them after replacing the controls' .camera, but no success.
Any suggestions or examples to look at?
AnimationMixer = new THREE.AnimationMixer(gltf.scene);
var that = this;
AnimationMixer.addEventListener('finished', function(e) {
    // replace default camera with animation camera
    that.controls.camera = animeCamera;
    that.controls.update();
    that.controls.enabled = true;
});
I'm trying to programmatically resize macOS windows. Similar to Rectangle.
I have the basic resizing code working (for example, moving the window to the right half of the screen), and when there is only one screen it works fine. However, when I try to resize with two screens in a vertical layout, the math does not work:
public func moveRight() {
    guard let frontmostWindowElement = AccessibilityElement.frontmostWindow() else {
        NSSound.beep()
        return
    }

    let screens = screenDetector.detectScreens(using: frontmostWindowElement)

    guard let usableScreens = screens else {
        NSSound.beep()
        print("Unable to obtain usable screens")
        return
    }

    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame
    print("Visible frame of current screen \(usableScreens.visibleFrameOfCurrentScreen)")

    let halfPosition = CGPoint(x: screenFrame.origin.x + screenFrame.width / 2, y: -screenFrame.origin.y)
    let halfSize = CGSize(width: screenFrame.width / 2, height: screenFrame.height)

    frontmostWindowElement.set(size: halfSize)
    frontmostWindowElement.set(position: halfPosition)
    frontmostWindowElement.set(size: halfSize)

    print("movedWindowRect \(frontmostWindowElement.rectOfElement())")
}
If my window is on the main screen, the resizing works correctly. However, if it is on the screen below (#3 in the diagram below), the Y coordinate ends up on a top monitor (#2 or #1, depending on the X coordinate) instead of the original one.
The output of the code:
Visible frame of current screen (679.0, -800.0, 1280.0, 775.0)
Raw Frame (679.0, -800.0, 1280.0, 800.0)
movedWindowRect (1319.0, 25.0, 640.0, 775.0)
As far as I can see, the problem lies in how screens and windows are positioned:
I'm trying to understand how I should position the window so that it stays on the correct screen (#3), but I've had no luck so far; there doesn't seem to be any method that returns absolute screen dimensions from which to place the window at the correct origin.
Any idea how this can be solved?
I figured it out; I had completely missed one of the functions used in the AccessibilityElement class:
static func normalizeCoordinatesOf(_ rect: CGRect) -> CGRect {
    // Flip the Y axis relative to the primary screen (the one with the menu
    // bar), converting between Cocoa's bottom-left origin and the
    // Accessibility API's top-left origin.
    var normalizedRect = rect
    let frameOfScreenWithMenuBar = NSScreen.screens[0].frame as CGRect
    normalizedRect.origin.y = frameOfScreenWithMenuBar.height - rect.maxY
    return normalizedRect
}
Basically, since the Accessibility API positions everything relative to the primary screen (with a top-left origin), there is no option but to take that screen's frame and offset against it to get the element's real position.
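For illustration, here is a rough sketch of how that helper slots into moveRight(): build the target rect in Cocoa's bottom-left coordinate space, then flip it once at the end before handing it to the Accessibility element. The halfRect name is mine, and the repeated set(size:) call is kept from the original code:

    let screenFrame = usableScreens.currentScreen.adjustedVisibleFrame

    // Right half of the screen, expressed in Cocoa (bottom-left origin) coordinates.
    let halfRect = CGRect(x: screenFrame.origin.x + screenFrame.width / 2,
                          y: screenFrame.origin.y,
                          width: screenFrame.width / 2,
                          height: screenFrame.height)

    // Flip into the Accessibility (top-left origin) coordinate space before applying.
    let normalized = AccessibilityElement.normalizeCoordinatesOf(halfRect)
    frontmostWindowElement.set(size: normalized.size)
    frontmostWindowElement.set(position: normalized.origin)
    frontmostWindowElement.set(size: normalized.size)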
I'm rather new to three.js, so what I'm doing might not be the most efficient way.
I have an object in AR on a mobile device, and I want to know whether I intersect with it when touching the screen.
I use the following code to generate the raycast, and it works initially.
const tempMatrix = new THREE.Matrix4();
tempMatrix.identity().extractRotation(this.controller.matrixWorld);
this.raycaster.ray.origin.setFromMatrixPosition(this.controller.matrixWorld);
this.raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);
However, I have the ability to reposition the object (i.e. reset the position so the object is in front, relative to the current camera direction and position) by moving and rotating the whole scene.
After the repositioning, the raycasting is completely offset and is not casting rays anywhere near where I touch the screen.
Repositioning is done like this (while it works, if there's a better way, let me know!) :
public handleReposition(): void {
    const xRotation = Math.abs(this.camera.rotation.x) > Math.PI / 2 ? -Math.PI : 0;
    const yRotation = this.camera.rotation.y;
    this.scene.rotation.set(xRotation, yRotation, xRotation);
    this.scene.position.set(this.camera.position.x, this.camera.position.y, this.camera.position.z);
}
How can I get the raycast to hit the correct new location?
Thanks!
Assuming this.scene is actually the main three.js Scene, it's usually a bad idea to change its rotation or position, since that will affect everything inside the scene, including the controller. I'd suggest moving your object instead, or adding your object(s) to a Group and moving that.
I am working on an animation app, and in every other ViewController I can draw on the image currently shown in the original ImageView. Is there any way to fix this?
This is exactly what is happening; I don't really know where in the code the problem lies:
https://drive.google.com/file/d/1M7qWKMugaqeDjGls3zvVitoRmwpOUJFY/view?usp=sharing
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app
The problem would appear to be that your gesture recognizer is still operational, even though you’ve presented another view on top of the current one.
This is a bit unusual. Usually when you present a view like that, the old one (and its gesture recognizers) is removed from the view hierarchy. I'm guessing that you're just sliding this second view on top of the other. There are a few solutions:
One solution would be to define this new view such that (a) it accepts user interaction; and (b) it handles those gestures itself. That will prevent the view behind it from picking up those gestures.
Another solution is to disable your gesture recognizer when the menu view is presented, and re-enable it when the menu is dismissed (see the sketch below).
The third solution is to change how you present that menu view, making sure you remove the current view from the view hierarchy when you do so. A standard show/present transition generally does this, though, so we may need to see how you’re presenting this menu view to comment further.
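To illustrate the second option, here is a minimal sketch, assuming the drawing controller owns the pan gesture recognizer that drives drawing and presents the menu itself; the panGesture and menuViewController names are illustrative, not from your code:

import UIKit

class DrawingViewController: UIViewController {
    let panGesture = UIPanGestureRecognizer()     // the recognizer that drives drawing
    let menuViewController = UIViewController()   // placeholder for the real menu

    func presentMenu() {
        // Stop drawing while the menu is on screen...
        panGesture.isEnabled = false
        present(menuViewController, animated: true)
    }

    func menuWasDismissed() {
        // ...and resume once it goes away.
        panGesture.isEnabled = true
    }
}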
That having been said, a few unrelated observations:
you should use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext;
rather than
if pencil.eraser == true { ... }
you can
if pencil.eraser { ... }
I’d suggest giving the pencil a computed property:
var color: UIColor { return UIColor(red: red, green: green, blue: blue, alpha: opacity) }
Then you can just refer to pencil.color;
property names should start with a lowercase letter; and
drawingFrame is a confusing name, IMHO, because it’s not a “frame”, but rather likely a UIImageView. I’d call it drawingImageView or something like that.
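As an example of the computed-property suggestion, the pencil type might look like this (a sketch assuming it stores its color components and width as CGFloats; the stored property names are guesses):

import UIKit

struct Pencil {
    var red: CGFloat
    var green: CGFloat
    var blue: CGFloat
    var opacity: CGFloat
    var pencilWidth: CGFloat
    var eraser: Bool

    // Build the stroke color on demand instead of storing a separate copy of it.
    var color: UIColor { UIColor(red: red, green: green, blue: blue, alpha: opacity) }
}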
Applying all of these observations to drawLine yields:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    // begin image context (and defer the ending of the context)
    UIGraphicsBeginImageContextWithOptions(drawingImageView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // draw the existing image into the context
    drawingImageView.image?.draw(in: drawingImageView.bounds)

    // grab the current context
    guard let context = UIGraphicsGetCurrentContext() else { return }

    // draw the line
    context.move(to: fromPoint)
    context.addLine(to: toPoint)
    context.setLineCap(.round)

    if pencil.eraser {
        // eraser
        context.setBlendMode(.clear)
        context.setLineWidth(10)
        context.setStrokeColor(UIColor.white.cgColor)
    } else {
        // opacity, brush width, etc.
        context.setBlendMode(.normal)
        context.setLineWidth(pencil.pencilWidth)
        context.setStrokeColor(pencil.color.cgColor)
    }
    context.strokePath()

    // store the snapshot back into the image view
    drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
}
Or, even better, retire UIGraphicsBeginImageContext altogether and use the modern UIGraphicsImageRenderer:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
    guard let pencil = pencil else { return }

    drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
        drawingImageView.image?.draw(in: drawingImageView.bounds)

        let path = UIBezierPath()
        path.move(to: fromPoint)
        path.addLine(to: toPoint)
        path.lineCapStyle = .round

        if pencil.eraser {
            path.lineWidth = 10
            UIColor.white.setStroke()
        } else {
            path.lineWidth = pencil.pencilWidth
            pencil.color.setStroke()
        }
        path.stroke()
    }
}
For more information on UIGraphicsImageRenderer, see the “Drawing off-screen” section of WWDC 2018 Image and Graphics Best Practices.
As an aside, once you get this problem behind you, you might want to revisit this "stroke from point a to point b and re-snapshot" logic. Capture an array of points and build a path from the whole series, and don't re-snapshot on every point, but only after a whole bunch have been added. This snapshotting process is slow, and you're going to find that the UX stutters more than it needs to. I personally re-snapshot after 100 points or so; at that point the time to restroke the whole path is slow enough that it's not much faster than the snapshot process, so if I snapshot and restart the path from where I left off, it speeds up again. A rough sketch of the idea follows.
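Here is that sketch, reusing the same drawingImageView pattern; the 100-point threshold, the StrokeAccumulator name, and the parameter list are all illustrative:

import UIKit

// Accumulates stroke points into one path, baking it into the image only
// occasionally instead of on every point.
class StrokeAccumulator {
    private var path = UIBezierPath()
    private var pointCount = 0

    /// Appends a point; returns a new baked image once the path passes 100 points.
    func add(_ point: CGPoint, over image: UIImage?, in bounds: CGRect,
             color: UIColor, width: CGFloat) -> UIImage? {
        if path.isEmpty { path.move(to: point) } else { path.addLine(to: point) }
        pointCount += 1
        guard pointCount >= 100 else { return nil }

        // Bake the accumulated path into the image...
        let baked = UIGraphicsImageRenderer(bounds: bounds).image { _ in
            image?.draw(in: bounds)
            color.setStroke()
            path.lineWidth = width
            path.lineCapStyle = .round
            path.stroke()
        }

        // ...then restart the path from the last point so the stroke stays continuous.
        path = UIBezierPath()
        path.move(to: point)
        pointCount = 0
        return baked
    }
}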
But you say:
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The above should draw only the image of drawingImageView and the stroke from fromPoint to toPoint. Your problem about drawing on “every single ViewController” rests elsewhere. We’d really need to see how precisely you are presenting this menu scene.
I have found a tutorial on parallax scrolling in SpriteKit using Objective-C, but I have been trying to port it to Swift without much success (very little, in fact).
Parallax Scrolling
Does anyone have any other tutorials or methods of doing parallax scrolling in Swift?
This is a SUPER simple way of starting a parallax background. WITH SKACTIONS! I am hoping it helps you understand the basics before moving on to a harder but more effective way of coding this.
I'll start with the code that gets a background moving, and then you can try duplicating it for the foreground or other objects you want to put in your scene.
override func didMove(to view: SKView) {
    // Declare the ground texture. If you're layering this image over the top
    // of another image, use a PNG with transparency.
    let groundImage = SKTexture(imageNamed: "background.jpg")

    // Move the image across the screen, from right to left.
    let moveBackground = SKAction.moveBy(x: -groundImage.size().width, y: 0,
                                         duration: TimeInterval(0.01 * groundImage.size().width))

    // Instantly reset the image so it can begin again from the right side.
    let resetBackground = SKAction.moveBy(x: groundImage.size().width, y: 0, duration: 0.0)

    // Run the move/reset sequence forever.
    let moveBackgroundForever = SKAction.repeatForever(SKAction.sequence([moveBackground, resetBackground]))

    // Tile enough copies to line the images up end to end across the screen.
    let tileCount = Int(2 + self.frame.size.width / groundImage.size().width)
    for i in 0..<tileCount {
        let sprite = SKSpriteNode(texture: groundImage)
        sprite.position = CGPoint(x: CGFloat(i) * sprite.size.width, y: sprite.size.height / 2)
        sprite.run(moveBackgroundForever)
        self.addChild(sprite)
    }
}
Once this is done, repeat the pattern for a foreground or other items, but run them at a different speed (a sketch follows below).
Make sure your pictures line up visually end to end. Just duplicating this code will NOT work, as you will see, but it is a starting point. Hint: if you're using items like simple obstructions, then having actions spawn a function that creates the obstruction may be a good way to go too. If there are more than two separate parallax objects, using an array for those objects will help performance. There are many ways to handle this, so my point is simple: if you can't port it from Objective-C, rethink it in Swift. Good luck!
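For example, a foreground layer can reuse the same pattern with a smaller duration factor so it scrolls faster than the background. In this sketch the "foreground.png" texture name, the 0.005 speed factor, and the zPosition value are illustrative assumptions:

let foregroundImage = SKTexture(imageNamed: "foreground.png")

// A smaller factor means less time per point of width, i.e. faster scrolling.
let moveForeground = SKAction.moveBy(x: -foregroundImage.size().width, y: 0,
                                     duration: TimeInterval(0.005 * foregroundImage.size().width))
let resetForeground = SKAction.moveBy(x: foregroundImage.size().width, y: 0, duration: 0.0)
let moveForegroundForever = SKAction.repeatForever(SKAction.sequence([moveForeground, resetForeground]))

let tileCount = Int(2 + frame.size.width / foregroundImage.size().width)
for i in 0..<tileCount {
    let sprite = SKSpriteNode(texture: foregroundImage)
    sprite.zPosition = 1  // draw in front of the background tiles
    sprite.position = CGPoint(x: CGFloat(i) * sprite.size.width, y: sprite.size.height / 2)
    sprite.run(moveForegroundForever)
    addChild(sprite)
}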