Calling a method when a particular CCAnimationFrame is displayed using CCAnimationFrameDisplayedNotification

I would like to call a method that rotates a sprite when a particular frame is displayed. I got a few ideas from this post, but I am having issues with how I should implement it. How would I define my dictionary for accessing the 3rd frame and link it to the selector? Below is what I've done. Please note that it's work-in-progress.
-(id) init
{
    // always call "super" init
    // Apple recommends to re-assign "self" with the "super's" return value
    if( (self=[super init]) ) {
        CCSpriteBatchNode* batchNode;
        CCSpriteFrameCache* frameCache;
        frameCache = [CCSpriteFrameCache sharedSpriteFrameCache];
        [frameCache addSpriteFramesWithFile:@"cat-hd.plist"];
        batchNode = [CCSpriteBatchNode batchNodeWithFile:@"cat-hd.pvr.ccz"];
        [self addChild:batchNode];

        CCSprite* wallbg = [CCSprite spriteWithSpriteFrameName:@"firstBg.png"];
        wallbg.position = ccp(240.0, 160.0);
        //wallbg.anchorPoint = ccp(0.5, 0.5);
        [batchNode addChild:wallbg];

        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(rotateAnimationFrame:) name:CCAnimationFrameDisplayedNotification object:nil];

        NSMutableArray* catAnimationArray = [[NSMutableArray alloc] init];
        for(int i = 1; i < 7; i++) // i < number of frames in the plist file
        {
            CCLOG(@"item %d added", i);
            [catAnimationArray addObject:
                [frameCache spriteFrameByName:
                    [NSString stringWithFormat:@"blackCat%d.png", i]]];
        }

        CCSprite* catSprite = [CCSprite spriteWithSpriteFrameName:@"blackCat1.png"];
        CGSize screenSize = [[CCDirector sharedDirector] winSize];
        catSprite.position = ccp(screenSize.width/2, screenSize.height/2);

        CCAnimation *animation = [CCAnimation animationWithSpriteFrames:catAnimationArray delay:0.3];
        CCAnimationFrame* thirdFrame = [animation.frames objectAtIndex:2];
        NSDictionary* uInfo = //how should I define my dictionary?

        id action = [CCRepeatForever actionWithAction:[CCAnimate actionWithAnimation:animation]];
        [catSprite runAction:action];
        [batchNode addChild:catSprite];
    }
    return self;
}
// on "dealloc" you need to release all your retained objects
-(void)rotateAnimationFrame:(NSNotification*)notification {
    NSDictionary *userInfoDictionary = [notification userInfo];
    if ([[notification name] isEqualToString:@"CCAnimationFrameDisplayedNotification"]) {
        CCSprite* thirdAnim = [CCSprite spriteWithSpriteFrameName:@"blackCat3.png"];
        CCRotateBy *rotateRight = [CCRotateBy actionWithDuration:0.2 angle:40.0];
        [thirdAnim runAction:rotateRight];
    }
}
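For reference, here is a minimal sketch of one way the dictionary could be wired up, assuming cocos2d 2.1, where CCAnimationFrame exposes a writable userInfo property and the notification carries that dictionary; the keys used below are made-up placeholders:

// In init, after building the animation: attach a payload to the third frame.
// cocos2d only posts CCAnimationFrameDisplayedNotification for frames whose
// userInfo dictionary is non-nil, and it passes that dictionary along.
CCAnimationFrame* third = [animation.frames objectAtIndex:2];
third.userInfo = [NSDictionary dictionaryWithObjectsAndKeys:
                     @"blackCat3.png", @"frameName",
                     [NSNumber numberWithFloat:40.0f], @"rotationAngle",
                     nil];

// Handler: the notification's object is the sprite that displayed the frame,
// so the animated cat can be rotated directly instead of creating a new sprite.
-(void)rotateAnimationFrame:(NSNotification*)notification
{
    NSDictionary* info = [notification userInfo];
    if ([[info objectForKey:@"frameName"] isEqualToString:@"blackCat3.png"]) {
        CCSprite* sprite = (CCSprite*)[notification object];
        float angle = [[info objectForKey:@"rotationAngle"] floatValue];
        [sprite runAction:[CCRotateBy actionWithDuration:0.2f angle:angle]];
    }
}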

Related

Metal Framework on macOS

I am creating a simple texture display that essentially renders video frames in BGRA format through Metal. I follow the same steps as shown in the Metal WWDC session, but I have problems creating the render encoder. My code is:
id <MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> commandQueue = [device newCommandQueue];
id<MTLLibrary> library = [device newDefaultLibrary];
// Create Render Command Descriptor.
MTLRenderPipelineDescriptor* renderPipelineDesc = [MTLRenderPipelineDescriptor new];
renderPipelineDesc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
renderPipelineDesc.vertexFunction = [library newFunctionWithName:@"basic_vertex"];
renderPipelineDesc.fragmentFunction = [library newFunctionWithName:@"basic_fragment"];
NSError* error = nil;
id<MTLRenderPipelineState> renderPipelineState = [device newRenderPipelineStateWithDescriptor:renderPipelineDesc
error:&error];
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
MTLRenderPassDescriptor* renderPassDesc = [MTLRenderPassDescriptor renderPassDescriptor];
id<CAMetalDrawable> drawable = [_metalLayer nextDrawable];
MTLRenderPassColorAttachmentDescriptor* colorAttachmentDesc = [MTLRenderPassColorAttachmentDescriptor new];
colorAttachmentDesc.texture = drawable.texture;
colorAttachmentDesc.loadAction = MTLLoadActionLoad;
colorAttachmentDesc.storeAction = MTLStoreActionStore;
colorAttachmentDesc.clearColor = MTLClearColorMake(0, 0, 0, 1);
[renderPassDesc.colorAttachments setObject:colorAttachmentDesc atIndexedSubscript:0];
[inTexture replaceRegion:region
mipmapLevel:0
withBytes:imageBytes
bytesPerRow:CVPixelBufferGetBytesPerRow(_image)];
id<MTLRenderCommandEncoder> renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];
[renderCmdEncoder setRenderPipelineState:_renderPipelineState];
[renderCmdEncoder endEncoding];
This code crashes at the following line with the error "No Render Targets Found":
id renderCmdEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDesc];
I am not able to figure out where and how to set the render target.
This will work perfectly; if you need help implementing it, let me know:
@import UIKit;
@import AVFoundation;
@import CoreMedia;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate> {
    NSString *_displayName;
    NSString *serviceType;
}
@property (retain, nonatomic) SessionContainer *session;
@property (retain, nonatomic) AVCaptureSession *avSession;
@end
#import "ViewController.h"
@interface ViewController () {
MTKView *_metalView;
id<MTLDevice> _device;
id<MTLCommandQueue> _commandQueue;
id<MTLTexture> _texture;
CVMetalTextureCacheRef _textureCache;
}
@property (strong, nonatomic) AVCaptureDevice *videoDevice;
@property (nonatomic) dispatch_queue_t sessionQueue;
@end

@implementation ViewController
- (void)viewDidLoad {
NSLog(@"%s", __PRETTY_FUNCTION__);
[super viewDidLoad];
_device = MTLCreateSystemDefaultDevice();
_metalView = [[MTKView alloc] initWithFrame:self.view.bounds];
[_metalView setContentMode:UIViewContentModeScaleAspectFit];
_metalView.device = _device;
_metalView.delegate = self;
_metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
_metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
_metalView.framebufferOnly = NO;
_metalView.autoResizeDrawable = NO;
CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);
[self.view addSubview:_metalView];
self.sessionQueue = dispatch_queue_create( "session queue", DISPATCH_QUEUE_SERIAL );
if ([self setupCamera]) {
[_avSession startRunning];
}
}
- (BOOL)setupCamera {
NSLog(@"%s", __PRETTY_FUNCTION__);
@try {
NSError * error;
_avSession = [[AVCaptureSession alloc] init];
[_avSession beginConfiguration];
[_avSession setSessionPreset:AVCaptureSessionPreset640x480];
// get list of devices; connect to front-facing camera
self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (self.videoDevice == nil) return FALSE;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
[_avSession addInput:input];
dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
[dataOutput setAlwaysDiscardsLateVideoFrames:YES];
[dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
[dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];
[_avSession addOutput:dataOutput];
[_avSession commitConfiguration];
} @catch (NSException *exception) {
    NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
    return FALSE;
} @finally {
return TRUE;
}
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
{
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
CVMetalTextureRef texture = NULL;
CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
if(status == kCVReturnSuccess)
{
_metalView.drawableSize = CGSizeMake(width, height);
_texture = CVMetalTextureGetTexture(texture);
_commandQueue = [_device newCommandQueue];
CFRelease(texture);
}
}
}
- (void)drawInMTKView:(MTKView *)view {
// creating command encoder
if (_texture) {
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
id<MTLTexture> drawingTexture = view.currentDrawable.texture;
// set up and encode the filter
MPSImageGaussianBlur *filter = [[MPSImageGaussianBlur alloc] initWithDevice:_device sigma:5];
[filter encodeToCommandBuffer:commandBuffer sourceTexture:_texture destinationTexture:drawingTexture];
// committing the drawing
[commandBuffer presentDrawable:view.currentDrawable];
[commandBuffer commit];
_texture = nil;
}
}
- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
}
@end
You should try one of the following points:
1. Instead of creating a new render pass descriptor, use the current render pass descriptor object from the MTKView. That render pass descriptor is already configured; you do not need to set anything. Try the sample code given below:
if let currentPassDesc = view.currentRenderPassDescriptor,
let currentDrawable = view.currentDrawable
{
let renderCommandEncoder =
commandBuffer.makeRenderCommandEncoder(descriptor: currentPassDesc)
renderCommandEncoder.setRenderPipelineState(renderPipeline)
//set vertex buffers and call draw apis
.......
.......
commandBuffer.present(currentDrawable)
}
2. You are creating a new render pass descriptor and then setting its color attachment to the drawable's texture. Instead, create a new texture object and set its usage to render target. The content will then be rendered into your new texture, but it will not be displayed on screen; to display it, you have to copy the content of your texture into the drawable's texture and then present the drawable (a sketch of that copy step follows the code below).
Below is the code for making the render target:
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 1.0
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .dontCare

let view = self.view as! MTKView
let textDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: Int(view.frame.width),
                                                        height: Int(view.frame.height),
                                                        mipmapped: false)
textDesc.depth = 1
// see below line
textDesc.usage = [MTLTextureUsage.renderTarget, MTLTextureUsage.shaderRead]
textDesc.storageMode = .private
mainPassFrameBuffer = device.makeTexture(descriptor: textDesc)
renderPassDescriptor.colorAttachments[0].texture = mainPassFrameBuffer
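The copy-to-drawable step mentioned in point 2 is not shown above. Here is a rough sketch in Objective-C (matching the question's code rather than the Swift snippets), assuming the drawable comes from a CAMetalLayer with framebufferOnly set to NO and that mainPassFrameBuffer matches the drawable's size and pixel format:

// Blit the offscreen render target into the drawable's texture, then present.
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer blitCommandEncoder];
[blitEncoder copyFromTexture:mainPassFrameBuffer
                 sourceSlice:0
                 sourceLevel:0
                sourceOrigin:MTLOriginMake(0, 0, 0)
                  sourceSize:MTLSizeMake(mainPassFrameBuffer.width,
                                         mainPassFrameBuffer.height, 1)
                   toTexture:drawable.texture
            destinationSlice:0
            destinationLevel:0
           destinationOrigin:MTLOriginMake(0, 0, 0)];
[blitEncoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];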

Several Hyperlinks in NSTableView Cell

At the moment I have an NSTableView with a custom NSTextFieldCell that holds an NSAttributedString with some ranges carrying NSLinkAttributeName. I tried to integrate code from Apple's TableViewLinks example and Toomas Vather's HyperlinkTextField.
I implemented the -trackMouse: method like this:
- (BOOL)trackMouse:(NSEvent *)theEvent inRect:(NSRect)cellFrame ofView:(NSView *)controlView untilMouseUp:(BOOL)flag {
BOOL result = YES;
NSUInteger hitTestResult = [self hitTestForEvent:theEvent inRect:cellFrame ofView:controlView];
if ((hitTestResult & NSCellHitContentArea) != 0) {
result = [super trackMouse:theEvent inRect:cellFrame ofView:controlView untilMouseUp:flag];
theEvent = [NSApp currentEvent];
hitTestResult = [self hitTestForEvent:theEvent inRect:cellFrame ofView:controlView];
if ((hitTestResult & NSCellHitContentArea) != 0) {
NSAttributedString* attrValue = [self.objectValues objectForKey:@"theAttributedString"];
NSMutableAttributedString* attributedStringWithLinks = [[NSMutableAttributedString alloc] initWithAttributedString:attrValue];
//HOW TO GET A RIGHT INDEX?
NSTableView* myTableView = (NSTableView *)[self controlView];
NSPoint eventPoint = [myTableView convertPoint:[theEvent locationInWindow] fromView:nil];
NSInteger myRow = [myTableView rowAtPoint:eventPoint];
NSRect myBetterViewRect = [myTableView rectOfRow:myRow];
__block NSTextView* myTextView = [[NSTextView alloc] initWithFrame:myBetterViewRect];
[myTextView.textStorage setAttributedString:attributedStringWithLinks];
NSPoint localPoint = [myTextView convertPoint:eventPoint fromView:myTableView];
NSUInteger index = [myTextView.layoutManager characterIndexForPoint:localPoint inTextContainer:myTextView.textContainer fractionOfDistanceBetweenInsertionPoints:NULL];
if (index != NSNotFound)
{
NSMutableArray* myHyperlinkInfos = [[NSMutableArray alloc] init];
NSRange stringRange = NSMakeRange(0, [attrValue length]);
[attrValue enumerateAttribute:NSLinkAttributeName inRange:stringRange options:0 usingBlock:^(id value, NSRange range, BOOL* stop)
{
if (value)
{
NSUInteger rectCount = 0;
NSRectArray rectArray = [myTextView.layoutManager rectArrayForCharacterRange:range withinSelectedCharacterRange:range inTextContainer:myTextView.textContainer rectCount:&rectCount];
for (NSUInteger i = 0; i < rectCount; i++)
{
[myHyperlinkInfos addObject:@{kHyperlinkInfoCharacterRangeKey : [NSValue valueWithRange:range], kHyperlinkInfoURLKey : value, kHyperlinkInfoRectKey : [NSValue valueWithRect:rectArray[i]]}];
}
}
}];
for (NSDictionary* info in myHyperlinkInfos)
{
NSRange range = [[info objectForKey:kHyperlinkInfoCharacterRangeKey] rangeValue];
if (NSLocationInRange(index, range))
{
NSURL* url = [NSURL URLWithString:[info objectForKey:kHyperlinkInfoURLKey]];
[[NSWorkspace sharedWorkspace] openURL:url];
}
}
}
}
}
return result;
}
The character index when clicking into the cell's (NSTextView's) text appears not to fit. So even if there is more than one link in the text, usually the last link is opened. My guess is that I don't get the NSRect of the clicked cell. If so, how could I get the right NSRect?
I am glad for any suggestions, comments, code pieces - or simpler solutions (even if this would include switching to a view-based tableview).
Thanks.
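One thing worth checking (an assumption, since the full project isn't shown): the temporary NSTextView is never added to a view hierarchy, so convertPoint:fromView: between it and the table view is unreliable, and its layout manager may not have laid out the text when characterIndexForPoint: is called. A sketch of sizing the scratch text view to cellFrame, forcing layout, and converting the point manually:

// Size the scratch text view to the cell frame rather than the whole row,
// so the glyph layout matches what the cell actually draws.
NSTextView* scratchTextView = [[NSTextView alloc] initWithFrame:cellFrame];
[scratchTextView.textStorage setAttributedString:attributedStringWithLinks];

// Force layout before asking the layout manager for character indices.
[scratchTextView.layoutManager ensureLayoutForTextContainer:scratchTextView.textContainer];

// The views are not in the same hierarchy, so convert manually:
// eventPoint and cellFrame are both in the table view's coordinate space.
NSPoint localPoint = NSMakePoint(eventPoint.x - NSMinX(cellFrame),
                                 eventPoint.y - NSMinY(cellFrame));
NSUInteger index = [scratchTextView.layoutManager characterIndexForPoint:localPoint
                                                         inTextContainer:scratchTextView.textContainer
                                fractionOfDistanceBetweenInsertionPoints:NULL];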

Cocos2d Two animations for one sprite

I animate my character like this:
-(void) createHero
{
_batchNode = [CCSpriteBatchNode batchNodeWithFile:@"Snipe1.png"];
[self addChild:_batchNode];
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"Snipe1.plist"];
//gather list of frames
NSMutableArray *runAnimFrames = [NSMutableArray array];
for(int i = 1; i <= 7; ++i)
{
[runAnimFrames addObject:
    [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:
        [NSString stringWithFormat:@"run000%d.png", i]]];
}
//create sprite and run the hero
self.hero = [CCSprite spriteWithSpriteFrameName:@"run0001.png"];
_hero.anchorPoint = CGPointZero;
_hero.position = self.heroRunningPosition;
//create the animation object
CCAnimation *runAnim = [CCAnimation animationWithFrames:runAnimFrames delay:1/30.0f];
self.runAction = [CCRepeatForever actionWithAction: [CCAnimate actionWithAnimation:runAnim restoreOriginalFrame:YES]];
[_hero runAction:_runAction];
[_batchNode addChild:_hero z:0];
}
This works fine and my character is running, but now I want a second animation when he jumps. At the moment I do it like this:
-(void)changeHeroImageDuringJump
{
[_hero setTextureRect:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"run0007.png"].rect];
}
But now I want a second plist with a second png, so I get a whole new animation when the character jumps. How can I implement that?
In my case, I implemented an AnimatedSprite class that handles this for me. This way, I add files like so:
NSDictionary* anims = [NSDictionary dictionaryWithObjectsAndKeys:
    @"Animations/Character/idle_anim.plist", @"Idle",
    @"Animations/Character/walk_anim.plist", @"Walk",
    @"Animations/Character/run_anim.plist", @"Run", nil];
CCNode* myNode = [[AnimatedSprite alloc] initWithDictionary:anims
                                            spriteFrameName:@"character_idle_01.png"
                                              startingIndex:@"Idle"];
Changing the animation is as simple as:
[myNode setAnimation: @"Run"];
Here's my implementation. This is the .h:
@interface AnimatedSprite : CCSprite
{
NSMutableDictionary* _spriteAnimations;
CCAction* _currentAnimation;
NSString* _currentAnimationName;
bool _initialized;
}
- (id) initWithTextureName:(NSString*) textureName;
- (id) initWithArray: (NSArray*) animList spriteFrameName: (NSString*) startingSprite startingIndex: (int)startingIndex;
- (id) initWithDictionary:(NSDictionary *)anims spriteFrameName:(NSString *)startingSprite startingIndex:(NSString *)startingAnim;
- (void) addAnimation: (NSString*) animationName andFilename: (NSString*) plistAnim;
- (void) setAnimationIndex: (int) index;
- (void) setAnimation: (NSString*) animationName;
@end
And this is the .m
#import "AKHelpers.h"
@implementation AnimatedSprite
NSMutableDictionary* _spriteAnimations;
- (id) initWithTextureName:(NSString*) textureName
{
CCTexture2D* texture = [[CCTextureCache sharedTextureCache] addImage:textureName];
CCSpriteFrame* frame = [CCSpriteFrame frameWithTexture:texture rect: CGRectMake(0, 0, 1, 1)];
if ((self=[super initWithSpriteFrame:frame]))
{
_currentAnimationName = nil;
_currentAnimation = nil;
_spriteAnimations = [[NSMutableDictionary alloc] init ];
_initialized = true;
}
return self;
}
- (id) initWithArray: (NSArray*) animList spriteFrameName: (NSString*) startingSprite startingIndex: (int)startingIndex
{
_initialized = false;
_spriteAnimations = [[NSMutableDictionary alloc] init];
// Add animations as numbers from 0 to animList.count
int i = 0;
for (NSString* anim in animList)
{
[self addAnimation: [NSString stringWithFormat:@"%d", i] andFilename:anim];
i++;
}
if ((self = [super initWithSpriteFrameName:startingSprite]))
{
_currentAnimationName = nil;
_currentAnimation = nil;
[self setAnimationIndex: startingIndex];
_initialized = true;
}
return self;
}
- (id) initWithDictionary:(NSDictionary *)anims spriteFrameName:(NSString *)startingSprite startingIndex:(NSString *)startingAnim
{
_initialized = false;
_spriteAnimations = [[NSMutableDictionary alloc] init];//[[NSMutableArray alloc] init];
// Add animations
for (NSString* key in anims)
{
[self addAnimation: key andFilename: [anims objectForKey: key] ];
}
if ((self = [super initWithSpriteFrameName:startingSprite]))
{
_currentAnimationName = nil;
_currentAnimation = nil;
[self setAnimation: startingAnim];
_initialized = true;
}
return self;
}
- (void) dealloc
{
[_currentAnimationName release];
[_spriteAnimations release];
[super dealloc];
}
- (void) setAnimation: (NSString*) animationName
{
if (![_currentAnimationName isEqualToString:animationName])
{
[_currentAnimationName release];
_currentAnimationName = [animationName copy];
// Stop current animation
if (_currentAnimation != nil)
[self stopAction:_currentAnimation];
//[self stopAllActions];
// Apply animation
NSDictionary* clip = [_spriteAnimations objectForKey: animationName];
if (clip)
{
_currentAnimation = [AKHelpers actionForAnimationClip:clip];
if (_currentAnimation)
[self runAction:_currentAnimation];
}
else
{
_currentAnimation = nil;
}
}
}
- (void) setAnimationIndex: (int) index
{
[self setAnimation: [NSString stringWithFormat:@"%d", index]];
}
- (void) addAnimation: (NSString*) animationName andFilename: (NSString*) plistAnim
{
NSDictionary *clip = [AKHelpers animationClipFromPlist:plistAnim];
if (clip)
{
[_spriteAnimations setObject:clip forKey:animationName];
if (_initialized && [_spriteAnimations count] == 1)
{
[self setAnimation:animationName];
}
}
}
@end
Create two different animation actions for running and jumping, and run them on an as-needed basis.
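A rough sketch of that approach, assuming a separate jump plist exists (the jump file name, frame names, and frame count below are placeholders):

// Load both frame sets up front.
CCSpriteFrameCache *cache = [CCSpriteFrameCache sharedSpriteFrameCache];
[cache addSpriteFramesWithFile:@"Snipe1.plist"];
[cache addSpriteFramesWithFile:@"SnipeJump.plist"];   // hypothetical jump sheet

NSMutableArray *jumpFrames = [NSMutableArray array];
for (int i = 1; i <= 5; ++i)                          // assumed frame count
{
    [jumpFrames addObject:[cache spriteFrameByName:
        [NSString stringWithFormat:@"jump000%d.png", i]]];
}
CCAnimation *jumpAnim = [CCAnimation animationWithFrames:jumpFrames delay:1/30.0f];
CCAction *jumpAction = [CCAnimate actionWithAnimation:jumpAnim restoreOriginalFrame:NO];

// When the hero jumps: stop the run loop, play the jump once, then go back to running.
// (In practice you may want to rebuild _runAction rather than reuse a stopped action.)
[_hero stopAction:_runAction];
[_hero runAction:[CCSequence actions:
    jumpAction,
    [CCCallBlock actionWithBlock:^{ [_hero runAction:_runAction]; }],
    nil]];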

Cocos2d populate layer based on menu in a different layer

I'm working on a species ID app and would like to populate a layer with sprites based on which animal you select on the main layer. I've made each animal a menu item, and can get my info layer to appear when pressing the button, but how can I set it up so the layer shows the right data depending on which animal you select? The info layer is not a full screen layer, but rather an overlaying layer that only fills about 75% of the screen, which is why I'm going with a layer rather than a scene. I know I can create a new layer for each animal (approx 50) and code it so each button calls its own layer, but I think populating based on which button is pressed would make for cleaner code. If flamingoButton is pressed, sprite is filled with flamingo.png and label is populated with flamingo information. How do I get my info layer to listen to the buttons on the main layer?
MainLayer.m code:
-(id) init
{
if( (self=[super init]))
{
CCMenuItemImage *flamingoButton = [CCMenuItemImage itemFromNormalImage:@"Explore-sign.png" selectedImage:@"Explore-sign.png" target:self selector:@selector(showSecondLayer:)];
flamingoButton.position = CGPointMake(0, 60);
flamingoButton.tag = 101;
CCMenu *menu = [CCMenu menuWithItems:flamingoButton, nil];
[self addChild:menu];
}
return self;
}
-(void) showSecondLayer: (id) sender
{
CCMenuItemImage *item = (CCMenuItemImage *) sender;
int itemID = item.tag;
secondLayer = [SecondLayer node];
secondLayer.position = CGPointMake(0, 700);
[self addChild:secondLayer];
CCMoveTo *moveLayer = [CCMoveTo actionWithDuration:1.0 position:CGPointMake(0, 0)];
[secondLayer runAction:moveLayer];
}
SecondLayer.m (the info layer)
-(id) init
{
if( (self=[super init]))
{
//Change this sprite image based on button from main layer. I don't have it coded in yet, but I understand the concept of putting a variable in the file string using %@ or %d
CCSprite *infoCard = [CCSprite spriteWithFile:@"species1.png"];
infoCard.anchorPoint = CGPointMake(0.5, 0);
infoCard.position = CGPointMake(512, 0);
[self addChild:infoCard];
}
return self;
}
Ok, this might work:
//MainLayer:
-(id) init
{
if( (self=[super init]))
{
CCMenuItem *flamingoButton = [CCMenuItemImage itemFromNormalImage:@"Explore-sign.png"
                                                    selectedImage:@"Explore-sign.png"
                                                           target:self
                                                         selector:@selector(showSecondLayer:)];
flamingoButton.position = ccp(0, 60);
flamingoButton.tag = 1;
CCMenu *menu = [CCMenu menuWithItems:flamingoButton, nil];
[self addChild:menu];
}
return self;
}
-(void) showSecondLayer: (CCMenuItem*) sender
{
secondLayer = [SecondLayer layerWithTag:[sender tag]];
secondLayer.position = ccp(0, 700);
[self addChild:secondLayer];
CCMoveTo *moveLayer = [CCMoveTo actionWithDuration:1.0 position:ccp(0, 0)];
[secondLayer runAction:moveLayer];
}
//Second Layer.h
+(id)layerWithTag:(NSInteger)aTag;
-(id) initWithTag:(NSInteger)aTag;
//Second Layer.m:
+(id)layerWithTag:(NSInteger)aTag {
return [[[SecondLayer alloc] initWithTag:aTag] autorelease];
}
-(id) initWithTag:(NSInteger)aTag
{
if( (self=[super init]))
{
//Change this sprite image based on button from main layer. I don't have it coded in yet, but I understand the concept of putting a variable in the file string using %@ or %d
CCSprite *infoCard = [CCSprite spriteWithFile:[NSString stringWithFormat:@"species%d.png", aTag]];
infoCard.anchorPoint = ccp(0.5, 0);
infoCard.position = ccp(512, 0);
[self addChild:infoCard];
}
return self;
}
EDIT:
Even though the previous solution works, it's not intuitive, and I feel I am breaking some OOP concepts. Most importantly, it is only usable given that your info about the animal can be retrieved using a single int! Doing it the following way is a bit better; it's totally up to you to decide:
Ehm, so, I would suggest you set up an Entity Class first:
//AnimalResources.h
#import "Blahblahblah"
//Give it a good name, I was always bad at Science:
@interface AnimalResources : NSObject {
//load all your properties:
NSString* info;
CCSprite* sprite;
...
}
//set the properties as needed:
//Make sure you properly manage this!! It is retained!
@property (nonatomic, retain) CCSprite* sprite;
...
//method prototype (signature.. am not sure)
//Now, we shall build on the fact that it will be easy for you to map an integer to the right resources:
+(id)animalResourcesWithTag:(NSInteger)aTag;
-(id)initAnimalResourcesWithTag:(NSInteger)aTag;
//AnimalResources.m:
@synthesize sprite, ... ;
+(id)animalResourcesWithTag:(NSInteger)aTag {
    return [[[AnimalResources alloc] initAnimalResourcesWithTag:aTag] autorelease];
}
-(id)initAnimalResourcesWithTag:(NSInteger)aTag {
if ((self = [super init])) {
//use tag to retrieve the resources:
//might use the stringFormat + %d approach, or have a dictionary/array plist, that maps an int to a dictionary of resource keys.
//string way of doing things:
self.sprite = [CCSprite spriteWithFile:[NSString stringWithFormat:@"species%d.png", aTag]];
...
//Dictionary: dict/array is an NSDictionary/NSArray read from disk sometime. Don't read it here, since it
//will read the file from disk many times if you do --> BAD. I could explain a rough way to do that if you
//need help
animalDict = [dict objectForKey:[NSString stringWithFormat:@"species%d.png", aTag]];
//OR...
animalDict = [array objectAtIndex:aTag];
//better to have @"spriteNameKey" defined in a macro somewhere: #define kAnimalResourceKeySprite @"SpriteKey"
self.sprite = [CCSprite spriteWithFile:[animalDict objectForKey:@"SpriteNameKey"]];
....
}
return self;
}
Phew! Then .. you guessed it!
-(void) showSecondLayer: (CCMenuItem*) sender
{
secondLayer = [SecondLayer layerWithAnimalResources:[AnimalResources animalResourcesWithTag:[sender tag]]];
secondLayer.position = ccp(0, 700);
[self addChild:secondLayer];
CCMoveTo *moveLayer = [CCMoveTo actionWithDuration:1.0 position:ccp(0, 0)];
[secondLayer runAction:moveLayer];
}
//Second Layer.h
+(id)layerWithAnimalResources:(AnimalResources*)resource;
-(id)initWithAnimalResources:(AnimalResources*)resource;
//Second Layer.m:
+(id)layerWithAnimalResources:(AnimalResources*)resource {
return [[[SecondLayer alloc] initWithAnimalResources:resource] autorelease];
}
-(id) initWithAnimalResources:(AnimalResources*)resource
{
if( (self=[super init]))
{
//Change this sprite image based on button from main layer. I don't have it coded in yet, but I understand the concept of putting a variable in the file string using %@ or %d
CCSprite *infoCard = [resource sprite];
infoCard.anchorPoint = ccp(0.5, 0);
infoCard.position = ccp(512, 0);
[self addChild:infoCard];
}
return self;
}
Give each menu item a unique id. In the method which you invoke on the tap of the button, you can reference the id of the sender. Use this id to populate the new layer with the unique information.
- (void) buttonPressed: (id) sender
{
CCMenuItem* item = (CCMenuItem*) sender;
int itemID = item.tag;
// Get unique data based on itemID and add new layer
}
EDIT: Per your code updates
-(void) showSecondLayer: (id) sender
{
CCMenuItemImage *item = (CCMenuItemImage *) sender;
int itemID = item.tag;
secondLayer = [SecondLayer node];
[secondLayer setItem: itemID]; // ADDED
secondLayer.position = CGPointMake(0, 700);
[self addChild:secondLayer];
CCMoveTo *moveLayer = [CCMoveTo actionWithDuration:1.0 position:CGPointMake(0, 0)];
[secondLayer runAction:moveLayer];
}
SecondLayer.m (the info layer)
-(id) init
{
if( (self=[super init]))
{
// Removed
}
return self;
}
-(void) setItem: (int) item
{
CCSprite *infoCard = [CCSprite spriteWithFile:[NSString stringWithFormat:@"species%d.png", item]];
infoCard.anchorPoint = CGPointMake(0.5, 0);
infoCard.position = CGPointMake(512, 0);
[self addChild:infoCard];
}

XCODE - (iOS) Timing / synchronising a view behaving like a slide show to a video

This one has been doing my head in for months, so time to swallow my pride and reach out for a little help. At the moment this is being done in a UIWebView as an HTML5/JS controlled system, but UIWebView frankly sucks and I'm looking to make this last component native too.
I have a collection of videos and at specific timed points during the video, I am calling a page of instructions that relate to the timed period in the video. The video controls also act as a controller for the instructions pages. So whatever timed point is reached, the corresponding page is animated into place.
I've looked into many, many options, with the closest coming in with HTTP video streaming and using timed metadata to initiate a view, but I am containing the videos locally on the device. And, as yet, I cannot find anything that looks like it will work. Seems simple enough in principle, but I'll be damned if I can find a decent solution...
Any ideas / pointers?
Here's the last attempt at going native with this before the remainder of my hair fell out - I think I may be seeing where I was heading in the wrong direction, but if you can spare a few moments, I'd really appreciate it!
The OBJECTIVE is to have a shoutOut that lives below the video and contains a page of instructions. At x seconds, the content will be refreshed to correspond to that portion of the video and persist until the next shoutOut for fresh content. This I have managed to achieve successfully. Where I have been falling down (a lot) is when I scrub the video back to a previous section: the shoutOut content remains at the position from which I scrubbed and stays there permanently. Or, as the code is below, it simply doesn't re-appear, as it is set to a timed visible duration.
Anyway, here's the code...
Header:
// START:import
#import <UIKit/UIKit.h>
// START_HIGHLIGHT
#import <MediaPlayer/MPMoviePlayerController.h>
#import "CommentView.h"
// END_HIGHLIGHT
// START:def
// START:wiring
@interface MoviePlayerViewController : UIViewController {
UIView *viewForMovie;
// END:wiring
// START_HIGHLIGHT
MPMoviePlayerController *player;
// END_HIGHLIGHT
// START:wiring
UILabel *onScreenDisplayLabel;
UIScrollView *myScrollView;
NSMutableArray *keyframeTimes;
NSArray *shoutOutTexts;
NSArray *shoutOutTimes;
}
@property (nonatomic, retain) IBOutlet UIView *viewForMovie;
// END:wiring
// START_HIGHLIGHT
@property (nonatomic, retain) MPMoviePlayerController *player;
// END_HIGHLIGHT
@property (nonatomic, retain) IBOutlet UILabel *onScreenDisplayLabel;
@property (nonatomic, retain) IBOutlet UIScrollView *myScrollView;
@property (nonatomic, retain) NSMutableArray *keyframeTimes;
// START_HIGHLIGHT
-(NSURL *)movieURL;
- (void)timerAction:(NSTimer*)theTimer;
- (void) playerThumbnailImageRequestDidFinish:(NSNotification*)notification;
- (void)handleTapFrom:(UITapGestureRecognizer *)recognizer;
- (IBAction) getInfo:(id)sender;
- (void)removeView:(NSTimer*)theTimer;
// END_HIGHLIGHT
// START:wiring
@end
// END:def
// END:wiring
// END:import
Main:
@implementation MoviePlayerViewController
// START:synth
@synthesize player;
@synthesize viewForMovie;
@synthesize onScreenDisplayLabel;
@synthesize myScrollView;
@synthesize keyframeTimes;
// END:synth
// Implement loadView to create a view hierarchy programmatically, without using a nib.
// START:viewDidLoad
// START:viewDidLoad1
- (void)viewDidLoad {
[super viewDidLoad];
keyframeTimes = [[NSMutableArray alloc] init];
shoutOutTexts = [[NSArray
arrayWithObjects:@"This is a test\nLabel at 2 secs ",
@"This is a test\nLabel at 325 secs",
nil] retain];
shoutOutTimes = [[NSArray
arrayWithObjects:[[NSNumber alloc] initWithInt: 2],
[[NSNumber alloc] initWithInt: 325],
nil] retain];
self.player = [[MPMoviePlayerController alloc] init];
self.player.contentURL = [self movieURL];
// END:viewDidLoad1
self.player.view.frame = self.viewForMovie.bounds;
self.player.view.autoresizingMask =
UIViewAutoresizingFlexibleWidth |
UIViewAutoresizingFlexibleHeight;
[self.viewForMovie addSubview:player.view];
[self.player play];
// START_HIGHLIGHT
[NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(timerAction:) userInfo:nil repeats:YES];
// END_HIGHLIGHT
// START:viewDidLoad1
[self.view addSubview:self.myScrollView];
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(movieDurationAvailable:)
name:MPMovieDurationAvailableNotification
object:nil];
}
// END:viewDidLoad
// END:viewDidLoad1
// START:movieURL
-(NSURL *)movieURL
{
NSBundle *bundle = [NSBundle mainBundle];
NSString *moviePath =
[bundle
pathForResource:@"BigBuckBunny_640x360"
ofType:@"m4v"];
if (moviePath) {
return [NSURL fileURLWithPath:moviePath];
} else {
return nil;
}
}
// END:movieURL
int position = 0;
- (void)timerAction:(NSTimer*)theTimer {
NSLog(#"hi");
int count = [shoutOutTimes count];
NSLog(#"count is at %d", count);
if (position < count) {
NSNumber *timeObj = [shoutOutTimes objectAtIndex:position];
int time = [timeObj intValue];
NSLog(#"time is at %d", time);
if (self.player.currentPlaybackTime >= time) {
CommentView *cview = [[CommentView alloc]
initWithText:[shoutOutTexts objectAtIndex:position]];
[self.player.view addSubview:cview];
position++;
[NSTimer scheduledTimerWithTimeInterval:4.0f target:self selector:@selector(removeView:) userInfo:cview repeats:NO];
}
}
}
- (void)removeView:(NSTimer*)theTimer {
UIView *view = [theTimer userInfo];
[view removeFromSuperview];
}
/*
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
[super viewDidLoad];
}
*/
// Override to allow orientations other than the default portrait orientation.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
return YES;
}
- (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];
// Release any cached data, images, etc that aren't in use.
}
- (void)viewDidUnload {
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
- (void)dealloc {
[super dealloc];
}
- (void) movieDurationAvailable:(NSNotification*)notification {
MPMoviePlayerController *moviePlayer = [notification object];
int duration = [moviePlayer duration];
[[NSNotificationCenter defaultCenter]
addObserver:self
selector:@selector(playerThumbnailImageRequestDidFinish:)
name:MPMoviePlayerThumbnailImageRequestDidFinishNotification
object:nil];
NSMutableArray *times = [[NSMutableArray alloc] init];
for(int i = 0; i < 20; i++) {
[times addObject:[NSNumber numberWithInt:5+i*((duration)/20)]];
}
[self.player requestThumbnailImagesAtTimes:times timeOption: MPMovieTimeOptionNearestKeyFrame];
}
int p = 0;
int ll=0;
- (void) playerThumbnailImageRequestDidFinish:(NSNotification*)notification {
NSDictionary *userInfo;
userInfo = [notification userInfo];
NSNumber *timecode;
timecode = [userInfo objectForKey: @"MPMoviePlayerThumbnailTimeKey"];
[keyframeTimes addObject: timecode];
UIImage *image;
image = [userInfo objectForKey: @"MPMoviePlayerThumbnailImageKey"];
int width = image.size.width;
int height = image.size.height;
float newwidth = 75 * ((float)width / (float)height);
self.myScrollView.contentSize = CGSizeMake((newwidth + 2) * 20, 75);
UIImageView *imgv = [[UIImageView alloc] initWithImage:image];
[imgv setUserInteractionEnabled:YES];
[imgv setFrame:CGRectMake(ll, 0, newwidth, 75.0f)];
ll+=newwidth + 2;
UITapGestureRecognizer *tapRecognizer = [[UITapGestureRecognizer alloc]
initWithTarget:self action:@selector(handleTapFrom:)];
[tapRecognizer setNumberOfTapsRequired:1];
[imgv addGestureRecognizer:tapRecognizer];
[tapRecognizer release];
[myScrollView addSubview:imgv];
}
- (void) getInfo:(id)sender
{
MPMovieMediaTypeMask mask = self.player.movieMediaTypes;
NSMutableString *mediaTypes = [[NSMutableString alloc] init];
if (mask == MPMovieMediaTypeMaskNone) {
[mediaTypes appendString:#"Unknown Media Type"];
} else {
if (mask & MPMovieMediaTypeMaskAudio) {
[mediaTypes appendString:#"Audio"];
}
if (mask & MPMovieMediaTypeMaskVideo) {
[mediaTypes appendString:#"Video"];
}
}
MPMovieSourceType type = self.player.movieSourceType;
NSMutableString *sourceType = [[NSMutableString alloc] initWithString:@""];
if (type == MPMovieSourceTypeUnknown) {
[sourceType appendString:#"Source Unknown"];
} else if (type == MPMovieSourceTypeFile) {
[sourceType appendString:#"File"];
} else if (type == MPMovieSourceTypeStreaming) {
[sourceType appendString:#"Streaming"];
}
CGSize size = self.player.naturalSize;
onScreenDisplayLabel.text = [NSString stringWithFormat:@"[Type: %@] [Source: %@] [Time: %.1f of %.f secs] [Playback: %.0fx] [Size: %.0fx%.0f]",
mediaTypes,
sourceType,
self.player.currentPlaybackTime,
self.player.duration,
self.player.currentPlaybackRate,
size.width,
size.height];
}
- (void)handleTapFrom:(UITapGestureRecognizer *)recognizer {
NSArray *subviews = [myScrollView subviews];
for (int i = 0; i < 20; i++) {
if (recognizer.view == [subviews objectAtIndex:i]) {
NSNumber *num = [keyframeTimes objectAtIndex:i];
self.player.currentPlaybackTime = [num intValue];
return;
}
}
}
@end
The Comment View Header:
#import <UIKit/UIKit.h>
@interface CommentView : UIView {
}
- (id)initWithFrame:(CGRect)frame andText:(NSString *) text;
- (id)initWithText:(NSString *) text;
@end
The Comment View Main:
#import "CommentView.h"
@implementation CommentView
- (id)initWithFrame:(CGRect)frame andText:(NSString *) text {
if ((self = [super initWithFrame:frame])) {
UIImage *image = [UIImage imageNamed:@"comment.png"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
[self addSubview:imageView];
CGRect rect = CGRectMake(20, 20, 200.0f, 90.0f);
UILabel *label = [[UILabel alloc] initWithFrame:rect];
label.text = text;
label.numberOfLines = 3;
label.adjustsFontSizeToFitWidth = YES;
label.textAlignment = UITextAlignmentCenter;
label.backgroundColor = [UIColor clearColor];
[self addSubview:label];
}
return self;
}
- (id)initWithText:(NSString *) text {
if ((self = [super init])) {
UIImage *image = [UIImage imageNamed:@"comment.png"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
[self addSubview:imageView];
CGRect rect = CGRectMake(20, 20, 200.0f, 90.0f);
UILabel *label = [[UILabel alloc] initWithFrame:rect];
label.text = text;
label.numberOfLines = 3;
label.adjustsFontSizeToFitWidth = YES;
label.textAlignment = UITextAlignmentCenter;
label.backgroundColor = [UIColor clearColor];
[self addSubview:label];
}
return self;
}
- (void)dealloc {
[super dealloc];
}
@end
Thoughts anyone?
Cheers!
What's wrong with monitoring currentPlaybackTime at regular intervals (assuming you are using an instance that implements MPMediaPlayback for playback)?
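A minimal sketch of that idea, reusing the shoutOutTimes/shoutOutTexts arrays from the question: instead of advancing a one-way position counter, the timer recomputes which shout-out the current playback time falls into, so scrubbing backwards or forwards picks up the right page. The _visibleShoutOutIndex and _currentCommentView ivars are assumptions added for bookkeeping:

// Called once a second by the NSTimer scheduled in viewDidLoad.
- (void)timerAction:(NSTimer*)theTimer
{
    NSTimeInterval now = self.player.currentPlaybackTime;

    // Find the last shout-out whose start time has already been passed.
    NSInteger active = -1;
    for (NSUInteger i = 0; i < [shoutOutTimes count]; i++) {
        if (now >= [[shoutOutTimes objectAtIndex:i] intValue]) {
            active = i;
        }
    }

    // Only touch the view hierarchy when the active shout-out changes.
    if (active != _visibleShoutOutIndex) {
        [_currentCommentView removeFromSuperview];
        [_currentCommentView release];
        _currentCommentView = nil;
        _visibleShoutOutIndex = active;

        if (active >= 0) {
            _currentCommentView = [[CommentView alloc]
                initWithText:[shoutOutTexts objectAtIndex:active]];
            [self.player.view addSubview:_currentCommentView];
        }
    }
}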
