I wonder what's wrong with my code; it doesn't show the particles correctly.
And here's the expected particle:
Implementation:
physicsWorld.contactDelegate = self
self.scene?.backgroundColor = UIColor.blackColor()
self.scene?.size = CGSize(width: 640, height: 1136)
self.addChild(SKEmitterNode(fileNamed: "MagicParticle")!)
You should try to safely unwrap the particle file first, just to make sure it cannot be nil
if let particle = SKEmitterNode(fileNamed: "MagicParticle") {
particle.position = ...
addChild(particle)
}
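For instance, here's a minimal sketch with a concrete position filled in (the mid-frame coordinates are just an example value, not taken from your code):
if let particle = SKEmitterNode(fileNamed: "MagicParticle") {
    // example position: center of the scene
    particle.position = CGPoint(x: frame.midX, y: frame.midY)
    addChild(particle)
}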
It's strange that it's not working; looking at your pictures, it doesn't seem like you have a typo.
Did you change the default spark.png in the particle effect?
Try cleaning your project, or delete the effect and recreate it if it still doesn't work.
As a side note, you can drop the
scene?
part entirely.
You are already in an SKScene, so self is the scene, and therefore you can just say
self.backgroundColor = ...
self.size = ...
or better
backgroundColor = ...
size = ...
As a general good coding practice in Swift, try to only use "self" when the compiler forces you to. So say
addChild(...)
instead of
self.addChild(...)
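Putting that together, a sketch of your scene setup without the redundant self and scene? might look like this (assuming it lives in didMove(to:)):
override func didMove(to view: SKView) {
    physicsWorld.contactDelegate = self   // self is needed here because it is the assigned value
    backgroundColor = .black
    size = CGSize(width: 640, height: 1136)
    // safely unwrap the emitter instead of force-unwrapping it
    if let particle = SKEmitterNode(fileNamed: "MagicParticle") {
        addChild(particle)
    }
}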
I think you should also include the file's extension, like this:
self.addChild(SKEmitterNode(fileNamed: "MagicParticle.sks")!)
I'm not sure if this is the best place to ask this question, but I feel like I'm struggling to grasp something relatively simple and can't find any decent advice on it in relation to KorGE, so if anyone can offer any assistance it would be much appreciated.
This is a question that can relate to multiple things, so I'll use a simple example to demonstrate.
Say I set up a solidRect and I want it to travel along the x coordinate by one on each update. I might do something like this:
solidRect(16.0, 16.0, Colors.GREEN) {
x = 0.0;
y = 0.0;
addUpdater {
x += 1.0;
}
}
This will create a solid rectangle of 16px by 16px in the container at the position 0,0, and each update will then add 1 to the x position, moving it across the screen. Now what I'd like to do is encapsulate this logic into a class in a separate file, so I could do something along the lines of the following:
movingShape() {
x = 0.0,
y = 0.0,
}
My expectation is that this would create the same solid rect visible in the scene, position it at 0, 0, and the updater would be contained within the class itself, so I can use this logic multiple times. However, whenever I do this, either by initializing the solidRect in the class or by passing it through in the constructor, the updater function fails to fire.
As a longer fully functioning example I'd want something along the lines of this:
suspend fun main() = Korge(width = 512.0, height=512.0, bgcolor = Colors["#2b2b2b"]) {
val sceneContainer = sceneContainer();
sceneContainer.changeTo({MyScene()});
}
class MyScene : Scene() {
override suspend fun SContainer.sceneMain() {
movingShape() {
x = 0.0;
y = 0.0;
}
}
}
How would I implement the movingShape class in this example so it works kind of how I mentioned above?
Once again, sorry if this seems like a trivial question or could be easily answered with some basic Kotlin knowledge. I'm relatively new to the language, mainly coming from a Java / JS / PHP background. Any help would be appreciated here. I've tried to look for a good example of this but so far have turned up empty.
I am working on an animation app, and in every other ViewController I can still draw on the image that's currently shown in the original ImageView. Is there any way to fix this?
This is exactly what is happening; I don't really know where in the code the problem exists.
https://drive.google.com/file/d/1M7qWKMugaqeDjGls3zvVitoRmwpOUJFY/view?usp=sharing
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The problem would appear to be that your gesture recognizer is still operational, even though you’ve presented another view on top of the current one.
This is a bit unusual. Usually when you present a view like that, the old one (and its gesture recognizers) are removed from the view hierarchy. I’m guessing that you’re just sliding this second view on top of the other. There are a few solutions:
One solution would be to make sure to define this new view such that (a) it accepts user interaction; and (b) it handles those gestures itself. That will avoid having the view behind it picking up those gestures.
Another solution is to disable your gesture recognizer when the menu view is presented, and re-enable it when the menu is dismissed (sketched below).
The third solution is to change how you present that menu view, making sure you remove the current view from the view hierarchy when you do so. A standard show/present transition generally does this, though, so we may need to see how you’re presenting this menu view to comment further.
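For the second approach, a minimal sketch might look like the following; drawGestureRecognizer and the presentMenu/dismissMenu methods are placeholder names, not from your project:
import UIKit

class DrawingViewController: UIViewController {
    // placeholder: whatever recognizer drives your drawing code
    var drawGestureRecognizer: UIPanGestureRecognizer!

    func presentMenu() {
        drawGestureRecognizer.isEnabled = false   // stop drawing while the menu is up
        // ... slide the menu view in ...
    }

    func dismissMenu() {
        // ... slide the menu view out ...
        drawGestureRecognizer.isEnabled = true    // resume drawing
    }
}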
That having been said, a few unrelated observations:
you should use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext;
rather than
if pencil.eraser == true { ... }
you can
if pencil.eraser { ... }
I’d suggest giving the pencil a computed property:
var color: UIColor { return UIColor(red: red, green: green, blue: blue, alpha: opacity) }
Then you can just refer to pencil.color (a fuller sketch of such a Pencil type follows this list);
property names should start with a lowercase letter; and
drawingFrame is a confusing name, IMHO, because it’s not a “frame”, but rather likely a UIImageView. I’d call it drawingImageView or something like that.
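For what it's worth, here's a sketch of what such a Pencil type could look like; the stored property names are assumptions based on the snippets in this answer:
import UIKit

struct Pencil {
    var red: CGFloat
    var green: CGFloat
    var blue: CGFloat
    var opacity: CGFloat
    var pencilWidth: CGFloat
    var eraser: Bool

    // computed property so callers can just say pencil.color
    var color: UIColor {
        UIColor(red: red, green: green, blue: blue, alpha: opacity)
    }
}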
Yielding:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
guard let pencil = pencil else { return }
//begins current context (and defer the ending of the context)
UIGraphicsBeginImageContextWithOptions(drawingImageView.bounds.size, false, 0)
defer { UIGraphicsEndImageContext() }
//where to draw
drawingImageView.image?.draw(in: drawingImageView.bounds)
//grabs the current context
guard let context = UIGraphicsGetCurrentContext() else { return }
//drawing the line
context.move(to: fromPoint)
context.addLine(to: toPoint)
context.setLineCap(.round)
if pencil.eraser {
//Eraser
context.setBlendMode(.clear)
context.setLineWidth(10)
context.setStrokeColor(UIColor.white.cgColor)
} else {
//opacity, brush width, etc.
context.setBlendMode(.normal)
context.setLineWidth(pencil.pencilWidth)
context.setStrokeColor(pencil.color.cgColor)
}
context.strokePath()
//storing the snapshot back into the imageView
drawingImageView.image = UIGraphicsGetImageFromCurrentImageContext()
}
Or, even better, retire UIGraphicsBeginImageContext altogether and use the modern UIGraphicsImageRenderer:
func drawLine(from fromPoint: CGPoint, to toPoint: CGPoint) {
guard let pencil = pencil else { return }
drawingImageView.image = UIGraphicsImageRenderer(size: drawingImageView.bounds.size).image { _ in
drawingImageView.image?.draw(in: drawingImageView.bounds)
let path = UIBezierPath()
path.move(to: fromPoint)
path.addLine(to: toPoint)
path.lineCapStyle = .round
if pencil.eraser {
path.lineWidth = 10
UIColor.white.setStroke()
} else {
path.lineWidth = pencil.pencilWidth
pencil.color.setStroke()
}
path.stroke()
}
}
For more information on UIGraphicsImageRenderer, see the “Drawing off-screen” section of WWDC 2018 Image and Graphics Best Practices.
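In case it helps, here's a hedged sketch of how drawLine(from:to:) might be driven from touch handling; the lastPoint property is an assumption, not something from your code:
private var lastPoint: CGPoint = .zero

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    lastPoint = touch.location(in: drawingImageView)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let currentPoint = touch.location(in: drawingImageView)
    drawLine(from: lastPoint, to: currentPoint)
    lastPoint = currentPoint
}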
As an aside, once you get this problem behind you, you might want to revisit this "stroke from point a to point b and re-snapshot" logic: capture an array of points, build a path from the whole series, and don't re-snapshot on every point, but only after a whole bunch have been added. This snapshotting process is slow, and you're going to find that the UX stutters a bit more than it needs to. I personally re-snapshot after 100 points or so (at which point the amount of time to re-stroke the whole path is slow enough that it's not much faster than the snapshot process, so if I snapshot and restart the path from where I left off, it then speeds up again).
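A rough sketch of that batching idea, assuming it lives in the same view controller as drawLine above; the points array, savedImage property, and the 100-point threshold are all illustrative, not from your code:
private var points: [CGPoint] = []
private var savedImage: UIImage?   // last snapshot; the live path is drawn on top of it

func addPoint(_ point: CGPoint) {
    points.append(point)
    redrawCurrentPath()
    if points.count > 100 {
        // fold the current path into the snapshot and start a fresh path from here
        savedImage = drawingImageView.image
        points = [point]
    }
}

private func redrawCurrentPath() {
    guard let pencil = pencil else { return }
    drawingImageView.image = UIGraphicsImageRenderer(bounds: drawingImageView.bounds).image { _ in
        savedImage?.draw(in: drawingImageView.bounds)
        guard points.count > 1 else { return }
        let path = UIBezierPath()
        path.move(to: points[0])
        for point in points.dropFirst() { path.addLine(to: point) }
        path.lineCapStyle = .round
        path.lineJoinStyle = .round
        path.lineWidth = pencil.pencilWidth
        pencil.color.setStroke()
        path.stroke()
    }
}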
But you say:
Expected to be able to draw only on the DrawingFrame ViewController. However, I can draw on every single ViewController in my app.
The above should draw only the image of drawingImageView and the stroke from fromPoint to toPoint. Your problem about drawing on “every single ViewController” rests elsewhere. We’d really need to see how precisely you are presenting this menu scene.
I created a macOS single window application and added a Sprite Particle System file with the Stars template, and the visual effect looks like this:
And I want to add it to my viewController. Following the guidance of this answer, I got the result below, which was not what I desired:
override func viewDidLoad() {
super.viewDidLoad()
let scene = SCNScene()
let particlesNode = SCNNode()
let particleSystem = SCNParticleSystem(named: "Welcome", inDirectory: "")
particlesNode.addParticleSystem(particleSystem!)
scene.rootNode.addChildNode(particlesNode)
skView.backgroundColor = .black
skView.scene = scene
}
So I'm wondering: what's wrong, and what should I do?
Here is the demo repo: Link Here
The particle system itself is the standard "star" SceneKit particle system available in Xcode, with no changes.
Well, I made a little progress. If I swivel the camera around 180 degrees, I can see the stars receding, so we can tell that the particle system is running OK. In the default orientation, though, all I saw was blinking lights. So I think the particles are being generated with a Z position of 0, the same as the camera's.
If I move the system's node away from the camera
particlesNode.position = SCNVector3(0, 0, -20)
I still just see blinking lights. But if I click on the SCNView, the animation works correctly. I see stars coming at me.
I don't understand why I have to click the view to get it to work right. I tried isPlaying = true but that made no difference.
I have found a tutorial on parallax scrolling in SpriteKit using Objective-C, but I have been trying to port it to Swift without much success (very little, in fact).
Parallax Scrolling
Does anyone have any other tutorials or methods of doing parallax scrolling in Swift?
This is a SUPER simple way of starting a parallax background. WITH SKACTIONS! I am hoping it helps you understand the basics before moving to a harder but more effective way of coding this.
So I'll start with the code that gets a background moving, and then you can try duplicating the code for the foreground or objects you want to put in your scene.
//declare the ground texture. If you're putting this image over the top of another image, use a png file.
let groundImage = SKTexture(imageNamed: "background.jpg")

//make the SKActions that will move the image across the screen. This one goes from right to left.
let moveBackground = SKAction.moveBy(x: -groundImage.size().width, y: 0, duration: TimeInterval(0.01 * groundImage.size().width))

//this resets the image to begin again on the right side.
let resetBackground = SKAction.moveBy(x: groundImage.size().width, y: 0, duration: 0.0)

//this makes the image move forever by putting the actions in the correct sequence.
let moveBackgroundForever = SKAction.repeatForever(SKAction.sequence([moveBackground, resetBackground]))

//then run a loop to make the images line up end to end.
var i: CGFloat = 0
while i < 2 + self.frame.size.width / groundImage.size().width {
    let sprite = SKSpriteNode(texture: groundImage)
    sprite.position = CGPoint(x: i * sprite.size.width, y: sprite.size.height / 2)
    sprite.run(moveBackgroundForever)
    self.addChild(sprite)
    i += 1
}
Once this is done, repeat the process for a foreground or other items, but run them at a different speed (see the sketch below).
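For example, here is a minimal sketch of a faster foreground layer using the same pattern; the foreground.png asset name and the 0.005 speed factor are just placeholders:
//same idea as the background, just a different texture and a faster speed.
let foregroundImage = SKTexture(imageNamed: "foreground.png")

let moveForeground = SKAction.moveBy(x: -foregroundImage.size().width, y: 0, duration: TimeInterval(0.005 * foregroundImage.size().width))
let resetForeground = SKAction.moveBy(x: foregroundImage.size().width, y: 0, duration: 0.0)
let moveForegroundForever = SKAction.repeatForever(SKAction.sequence([moveForeground, resetForeground]))

var j: CGFloat = 0
while j < 2 + self.frame.size.width / foregroundImage.size().width {
    let sprite = SKSpriteNode(texture: foregroundImage)
    sprite.zPosition = 1   //draw in front of the background
    sprite.position = CGPoint(x: j * sprite.size.width, y: sprite.size.height / 2)
    sprite.run(moveForegroundForever)
    self.addChild(sprite)
    j += 1
}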
Make sure your pictures line up visually end to end. Just duplicating this code will NOT work, as you will see, but it is a starting point. Hint: if you're using items like simple obstructions, then using actions to spawn a function that creates the obstruction may be a good way to go too. If there are more than two separate parallax objects, then using an array for those objects would help performance. There are many ways to handle this, so my point is simple: if you can't port it from Objective-C, then rethink it in Swift. Good luck!
I found a class called ClippingNode that I can use on sprites to only display a specified rectangular area: https://github.com/njt1982/ClippingNode
One problem is that I need to do exactly the opposite, meaning I want the inverse of that. I want everything outside of the specified rectangle to be displayed, and everything inside to be taken out.
In my test I'm using the position of a sprite, which will update every frame, so a new clipping rect will need to be defined each frame as well.
CGRect menuBoundaryRect = CGRectMake(lightPuffClass.sprite.position.x, lightPuffClass.sprite.position.y, 100, 100);
ClippingNode *clipNode = [ClippingNode clippingNodeWithRect:menuBoundaryRect];
[clipNode addChild:darkMapSprite];
[self addChild:clipNode z:100];
I noticed the ClippingNode class allocs internally, but I'm not using ARC (the project is too big and complex to update to ARC), so I'm wondering what I'll need to release, and where.
I've tried a couple of masking classes, but whatever I mask fits over the entire sprite (my sprite covers the entire screen). Additionally, the mask will need to move, so I thought glScissor would be a good alternative if I can get it to do the inverse.
You don't need anything beyond what comes out of the box.
You have to define a CCClippingNode with a stencil, and then set it to be inverted, and you're done. I added a carrot sprite to show how to add sprites to the clipping node in order for them to be taken into account.
@implementation ClippingTestScene
{
    CCClippingNode *_clip;
    CCSprite *_img;
}
And the implementation part
_clip = [[CCClippingNode alloc] initWithStencil:[CCSprite spriteWithImageNamed:@"white_board.png"]];
_clip.alphaThreshold = 1.0f;
_clip.inverted = YES;
_clip.position = ccp(self.boundingBox.size.width/2 , self.boundingBox.size.height/2);
[self addChild:_clip];
_img = [CCSprite spriteWithImageNamed:@"carrot.png"];
_img.position = ccp(-10.0f, 0.0f);
[_clip addChild:_img];
You have to set an extra flag for this to work though, but Cocos will spit out what you need to do in the console.
I once used CCScissorNode.m from https://codeload.github.com/NoodlFroot/ClippingNode/zip/master
The implementation (not the inverse you are looking for) was something like:
CGRect innerClippedLayer = CGRectMake(SCREENWIDTH/14, SCREENHEIGHT/6, 275, 325);
CCScissorNode *tmpLayer = [CCScissorNode scissorNodeWithRect:innerClippedLayer];
[self addChild:tmpLayer];
So for you, it may be that if you know the area (the rectangular area that you don't want to show, i.e. the inverse) and you know the screen area, then you can deduct the rectangle area from the screen area. This would give you the inverse area. I have not done this. Maybe tomorrow I can post some code.