This is probably a simple question, but I'm having trouble getting a simple footstep sound to play every time my character makes contact with the ground.
Could someone kindly take a look at this:
// I want to play a sound when the walk is at frame 4
[[SimpleAudioEngine sharedEngine] playEffect:@"footstep.mp3"];
[pleaseplaymysoundatthisframe:@"walk%04d.png"]; // pseudocode for what I'm after
Any help much appreciated!
I think it would be much simpler to calculate the needed delay, which lets you play your sound at the right moment.
Or you can divide your animation into two pieces and create a sequence of actions: the first action would be a CCAnimate with frames 0-4 of the animation, then a CCCallFunc action that calls the method that plays the sound, then a CCAnimate action with the rest of your frames.
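A rough sketch of that second approach might look like this (assuming cocos2d 1.x; firstHalfFrames, secondHalfFrames and sprite are placeholder names for the frame arrays and the CCSprite you already have):
// split the walk cycle into two CCAnimate actions with a CCCallFunc in between
id firstHalf  = [CCAnimate actionWithAnimation:
                  [CCAnimation animationWithFrames:firstHalfFrames delay:0.066f]];
id playSound  = [CCCallFunc actionWithTarget:self selector:@selector(playFootSteps)];
id secondHalf = [CCAnimate actionWithAnimation:
                  [CCAnimation animationWithFrames:secondHalfFrames delay:0.066f]];
[sprite runAction:[CCSequence actions:firstHalf, playSound, secondHalf, nil]];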
You could do something like this (mileage may vary, this is leaked from my frail memory):
// i suppose you have your animFrames in some local var animFrames
// and your CCSprite in some other local/iVar frameOne
float animationDuration = 0.8f;
int animationFrames = 12;
float fourthFrameTime = animationDuration * 4 / animationFrames;
float animFrameDelay = animationDuration / animationFrames;

// wrap the CCAnimation in a CCAnimate so it can run inside the CCSpawn
id animateSprite = [CCAnimate actionWithAnimation:
                     [CCAnimation animationWithFrames:animFrames delay:animFrameDelay]];
id playFootsteps = [CCSequence actions:
                     [CCDelayTime actionWithDuration:fourthFrameTime],
                     [CCCallFunc actionWithTarget:self
                                         selector:@selector(playFootSteps)],
                     nil];
id spawn = [CCSpawn actions:animateSprite, playFootsteps, nil];
[frameOne runAction:spawn];
// and somewhere, the selector that gets called:
- (void)playFootSteps
{
    [[SimpleAudioEngine sharedEngine] playEffect:@"footstep.mp3"];
}
If this is a recurring theme in your app, I suggest you create a meta sound manager that can be invoked via a home-grown protocol/interface from any object instance (typically a singleton, to stick with cocos2d's general approach), and manage the complexity of sound start/stop/resume in there.
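A bare-bones sketch of what such a singleton sound manager could look like (the class and method names are invented for illustration):
@interface SoundManager : NSObject
+ (SoundManager *)sharedManager;
- (void)playEffect:(NSString *)name;
@end

@implementation SoundManager

+ (SoundManager *)sharedManager
{
    static SoundManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[SoundManager alloc] init]; });
    return shared;
}

- (void)playEffect:(NSString *)name
{
    // forwards to SimpleAudioEngine; start/stop/resume bookkeeping would live here
    [[SimpleAudioEngine sharedEngine] playEffect:name];
}

@end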
I have a simple AVPlayer-based OS X app that plays local media. It has a skip forward and backward feature based on -seekToTime:. On some media, there is an annoying 3-7 second delay in getting the media to continue playing (especially going forward). I have tried -seekToTime:toleranceBefore:toleranceAfter: with various tolerances. No luck.
Posting a previously solved issue for the record... I noticed that the seekToTime: skipping worked fine when playback was paused. I immediately (i.e., several weeks later) realized that it might make sense to stop playback before seeking, then restart it. So far the problem is 100% solved, and it is blazing fast. It might be of some use to people trying to do smooth looping (though I don't know how to trigger the completion handler that signals the end of the loop). I don't know if it works on iOS. Sample code is attached:
- (void)movePlayheadToASpecificTime
{
    // assumes this is a method for an AVPlayerView subclass, properly wired with IB
    // self.player is a property of AVPlayerView, which points to an AVPlayer

    // save the current rate
    float currentPlayingRate = self.player.rate;

    // may need fancier tweaking if the time scale changes within the asset
    int32_t timeScale = self.player.currentItem.asset.duration.timescale;

    // stop playback
    self.player.rate = 0;

    Float64 desiredTimeInSeconds = 4.5; // or whatever

    // convert desired time to CMTime
    CMTime target = CMTimeMakeWithSeconds(desiredTimeInSeconds, timeScale);

    // perform the move
    [self.player seekToTime:target];

    // restore playing rate
    self.player.rate = currentPlayingRate;
}
I'm attempting to implement window-flipping identical to that in iWork -
https://dl.dropbox.com/u/2338382/Window%20Flipping.mov
However, I can't quite seem to find a straightforward way of doing this. Some tutorials suggest putting snapshot images of both sides of the window into a bigger, transparent window and animating those. That might work, but it seems a bit hacky, and the sample code is always bloated. Other tutorials suggest using private APIs, and since this app may be MAS-bound, I'd like to avoid that.
How should I go about implementing this? Does anyone have any hints?
NSWindow+Flipping
I've rewritten the ancient code linked below into NSWindow+Flipping. You can grab these source files from my misc. Cocoa collection on GitHub, PCSnippets.
You can achieve this using CoreGraphics' CGS transition calls (note that these are private APIs). Take a look at this:
- (void)flipWithDuration:(float)duration forwards:(BOOL)forwards
{
    CGSTransitionSpec spec;
    CGSTransitionHandle transitionHandle;
    CGSConnection cid = CGSDefaultConnection;

    spec.type = CGSFlip;
    spec.option = 0x80 | (forwards ? 2 : 1);
    spec.wid = [self windowNumber];
    spec.backColor = nil;

    transitionHandle = -1;
    CGSNewTransition(cid, &spec, &transitionHandle);
    CGSInvokeTransition(cid, transitionHandle, duration);

    [[NSRunLoop currentRunLoop] runUntilDate:
        [NSDate dateWithTimeIntervalSinceNow:duration]];

    CGSReleaseTransition(cid, transitionHandle);
}
You can download a sample project here. More info here.
UPDATE:
Take a look at this project. It's actually what you need.
About this project:
This category on NSWindow allows you to switch one window for another, using the "flip" animation popularized by Dashboard widgets. This was a nice excuse to learn something about CoreImage and how to use it in Cocoa. The demo app shows how to use it. Scroll to the end to see what's new in this version!
Basically, all you need to do is something like:
[someWindow flipToShowWindow:someOtherWindow forward:YES];
However, this code makes some assumptions:
- someWindow (the initial window) is already visible on-screen.
- someOtherWindow (the final window) is not already visible on-screen.
- Both windows can be resized to the same size, and aren't too large or complicated (the latter conditions being less important the faster your CPU/video card is).
- The windows won't go away while the animation is running.
- The user won't try to click on the animated window or do something while the animation is running.
The implementation is quite straightforward. I move the final window to the same position and size as the initial window. I then position a larger transparent window so it covers that frame. I render both window contents into CIImages, hide both windows, and start the animation. Each frame of the animation renders a perspective-distorted image into the transparent window. When the animation is done, I show the final window. Some tricks are used to make this faster; for instance, the flipping window is set up only once, and the final window is hidden by setting its alpha to 0.0 rather than by ordering it out and later ordering it back in again.
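For reference, one way to get a window's contents into a CIImage for this kind of trick is a snapshot via the public CGWindowListCreateImage call. This is only an illustration, not code from the linked project; window here is a placeholder for the NSWindow being captured:
// capture the window's current contents into a CIImage
CGImageRef snapshot = CGWindowListCreateImage(CGRectNull,
                                              kCGWindowListOptionIncludingWindow,
                                              (CGWindowID)[window windowNumber],
                                              kCGWindowImageBoundsIgnoreFraming);
CIImage *content = [CIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);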
The main bottleneck is the CoreImage filter, and the first frame always takes much longer to render — 4 or 6 times what it takes for the remaining frames. I suppose this time is spent on setup and downloading to the video card. So I calculate the time this takes and draw a second frame at a stage where the rotation begins to show. The animation begins at this point, but, if those first two frames took too long, I stretch the duration to make sure that at least 5 more frames will get rendered. This will happen with slow hardware or large windows. At the end, I don't render the last frame at all and swap the final window in instead.
This is the first time I've ever posted in a forum, so thanks in advance for anyone who takes the time to read/answer this question.
What I'm trying to create is basically a flipping coin animation, which starts off turning very fast and then slows down to stop with a (randomly generated) side facing upwards after about 8 seconds.
I've done the animation of a complete flip, which lasts about half a second, and made it in to a movieclip... now I'm stuck!
Any ideas how I might go about doing this in ActionScript 3?
The fastest way around this would be to use some very basic ActionScript. First, create two animations (one heads, one tails). You only need a single frame for this, and you don't need to place the movie clips on the stage. Use the following or similar code:
var whichSide:int = 0;
var coin1:coinAnimation1 = new coinAnimation1();
var coin2:coinAnimation2 = new coinAnimation2();

// Math.random() returns 0..1, so rounding gives 0 or 1
whichSide = Math.round(Math.random());

if (whichSide == 1)
{
    addChild(coin1);
}
else
{
    addChild(coin2);
}
Just don't forget to right-click each movie clip and choose Export for ActionScript, giving the movie clips the classes coinAnimation1 and coinAnimation2.
Hope this helps.
I've accomplished this kind of animation by tweening keyframes with the Tweener class. You can easily tween the frame parameter with a specific transition (easing)...
Basic example:
Tweener.addTween(myMovieClip, {_frame:10, time:2.5});
More information about Tweener here
I am creating an HTTP Live Streaming Client for Mac that will control video playback on a large screen. My goal is to have a control UI on the main screen, and full screen video on the secondary screen.
Using AVFoundation, I have successfully been able to open the stream and control all aspects of it from my control UI, and I am now attempting to duplicate the video on a secondary screen. This is proving more difficult than I imagined...
On the control screen, I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
Digging deeper, I found this in the AVFoundation docs:
You can create arbitrary numbers of player layers with the same AVPlayer object. Only the most-recently-created player layer will actually display the video content on-screen.
This is actually useless to me, because I need the video showing correctly in both views.
I can create a new instance of AVPlayerItem from the same AVAsset, then create a new AVPlayer and add it to a new AVPlayerLayer and have video show up, but they are no longer in sync because they are two different players generating two different audio streams playing different parts of the same stream.
Does anyone have any suggestions on how to get the same AVPlayer content into two different views? Perhaps some sort of CALayer mirroring trick?
AVSynchronizedLayer may help. I'm using it differently (to synchronize two different media objects rather than the same one), but in principle it should be possible to load the same item twice and then use an AVSynchronizedLayer to keep them synced.
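For what it's worth, basic AVSynchronizedLayer setup looks something like the sketch below; playerItem, animatedLayer and containerView are placeholders, and whether this actually solves the two-view mirroring problem is only the speculation above:
// sublayers of an AVSynchronizedLayer animate on the player item's timebase
AVSynchronizedLayer *syncLayer =
    [AVSynchronizedLayer synchronizedLayerWithPlayerItem:playerItem];
syncLayer.frame = containerView.bounds;
[syncLayer addSublayer:animatedLayer];
[containerView.layer addSublayer:syncLayer];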
I see that this topic is very old, but I think it will still be helpful. You wrote that:
I have an AVPlayerLayer that is displaying the video content from an AVPlayer. My goal was to create another AVPlayerLayer, and send it the same player so that both players are playing the same video at the same time in two different views. However, that is not working.
But, it's working. I just tried it in my project. Here's my code of layer initializations:
AVPlayerLayer *playerLayer = [AVPlayerLayer new];
[playerLayer setPlayer:_testPlayer];
playerLayer.frame = CGRectMake(0, 0, _videoView.frame.size.width, _videoView.frame.size.height);
playerLayer.contentsGravity = kCAGravityResizeAspect;
playerLayer.videoGravity = AVLayerVideoGravityResizeAspect;
_defaultTransform = playerLayer.affineTransform;
[_videoView.layer insertSublayer:playerLayer atIndex:0];
AVPlayerLayer *testLayer_1 = [AVPlayerLayer playerLayerWithPlayer:_testPlayer];
testLayer_1.frame = CGRectMake(100, 100, 200, 200);
testLayer_1.contentsGravity = kCAGravityResizeAspect;
testLayer_1.videoGravity = AVLayerVideoGravityResizeAspect;
[_videoView.layer insertSublayer:testLayer_1 atIndex:1];
And here's what I got: two AVPlayerLayers playing the same AVPlayerItem, in perfect sync.
Apple's docs now state this:
You can create arbitrary numbers of player layers with the same AVPlayer object, but you should limit the number of layers you create to avoid impacting playback performance.
link to docs
This does indeed work in my app as well.
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it with an object from Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autoreleasepool that actually responds to the RunLoop I've created. Nevermind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon Quicktime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error: %d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames;
{
CVDisplayLinkStart(displayLink);
}
- (void)stopRequestingFrames;
{
CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc. I change corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
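A simplified sketch of that deferred-settings idea (the flag names, instance variables and methods here are placeholders, not the original project's code):
enum { kChangeGainFlag = 1 << 0, kChangeExposureFlag = 1 << 1 };

// UI side (main thread): record the new value and mark it as pending
- (void)queueGainChange:(float)newGain
{
    pendingGain = newGain;               // instance variable holding the desired value
    settingsToChange |= kChangeGainFlag; // instance variable holding the bit flags
}

// display link callback side: apply pending changes before grabbing the next frame
- (void)applyPendingCameraSettings
{
    if (settingsToChange & kChangeGainFlag)
    {
        // push pendingGain to the camera here (e.g. via libdc1394)
        settingsToChange &= ~kChangeGainFlag;
    }
}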
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
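A minimal sketch of that timer approach; grabAndDisplayFrame is a placeholder for whatever QTKit-based grab-and-display method you write:
// fire roughly 30 times per second on the main run loop
frameTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                              target:self
                                            selector:@selector(grabAndDisplayFrame)
                                            userInfo:nil
                                             repeats:YES];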