AdWhirl - moving view to centre screen on rotation

This is probably something quite basic. I am trying to implement AdWhirl in my app, which I have done successfully as far as the technical side goes. When I load my app, the ad loads and then slides down from the top to sit at the bottom of the screen. However, when I rotate the device, the advert keeps its "precise" (absolute) location and moves off screen. When the advert reloads (it refreshes every 15 seconds), it moves up to the bottom of the landscape screen. Again, when rotating back from landscape, the advert aligns itself in the middle of the page vertically (covering content) until a new advert loads. I have attached a number of photos in a series showing what happens, all in order and taken at least 10 seconds apart (showing a test advert of "Hello").
My code from the implementation file is included at the end of this post - sorry for not using the code format; I just didn't want to put spaces in front of the whole block, and I think it's all relatively relevant. It's also available on Pastebin: http://pastebin.com/mzavbj2L
Sam
Sorry - it wouldn't let me upload images. Please send me a PM for images.

I recommend handling the rotation in the willRotateToInterfaceOrientation:duration: method or the didRotateFromInterfaceOrientation: method. There you will be able to determine your new orientation and the new size of your view, and then change the frame of your AdWhirl view to match.
After looking a bit closer, however, it looks like you might need to make *adView a variable declared in your .h file so you can access it from the rotation methods.
Once you do that, you can set your new frame as you did in the viewDidLoad method:
CGSize adSize = [adView actualAdSize];
CGRect newFrame = adView.frame;
newFrame.size = adSize;
// Centre horizontally and pin to the bottom of the current bounds.
newFrame.origin.x = (self.view.bounds.size.width - adSize.width) / 2;
newFrame.origin.y = self.view.bounds.size.height - adSize.height;
[UIView beginAnimations:nil context:NULL]; // needed to balance the commit below
adView.frame = newFrame;
[UIView commitAnimations];
Ideally, you would move this code into its own method so you can call it from wherever you want in your view controller code (e.g. viewDidLoad and the rotation method(s)), as in the sketch below.
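For reference, a minimal sketch of that refactor, assuming the AdWhirlView ivar is named adView as above (the helper name repositionAdView is made up):

- (void)repositionAdView {
    if (adView == nil) return;
    CGSize adSize = [adView actualAdSize];
    CGRect newFrame = adView.frame;
    newFrame.size = adSize;
    // Centre horizontally and pin to the bottom of the current bounds.
    newFrame.origin.x = (self.view.bounds.size.width - adSize.width) / 2;
    newFrame.origin.y = self.view.bounds.size.height - adSize.height;
    adView.frame = newFrame;
}

- (void)didRotateFromInterfaceOrientation:(UIInterfaceOrientation)fromOrientation {
    [super didRotateFromInterfaceOrientation:fromOrientation];
    // self.view.bounds now reflects the new orientation.
    [self repositionAdView];
}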

Thanks for your help - it was close to a solution (I only just got it working tonight; I had mostly forgotten about it!). After I made your changes to my .h, I tried calling [adWhirlView adWhirlDidReceiveAd:(AdWhirlView *)adView]. This kept returning errors, even though it was defined in the AdWhirlView class. As a fix, I added -(void)adWhirlDidReceiveAd:(AdWhirlView *)adView to my view controller and then called [self adWhirlDidReceiveAd:adView] each time the frame rotated.
Thanks again - so glad it's finally working.
Sam

Related

SCNView overlay causes tearing on resize

I'm using SceneKit to display a 3D scene (so far, a single quad), and the overlaySKScene to display a 2D overlay (which so far is just an SKNode with no geometry, though I had previously used a single SKLabelNode). It's a pretty simple view inside a bunch of nested NSSplitViews. During normal use, it works brilliantly. The problem comes when I try to resize the window or split view - I get areas of red leaking through my nice background, which disappear shortly after.
I'm running this on a 2016 MBP with a Radeon Pro 460, and captured the frame using QuickTime's screen capture.
Disabling the overlay removes the red areas, which makes me think that it's the problem. Disabling the statistics bar or the scroller (a child view of the SCNView) does not have any impact. My most minimal SKScene subclass is defined as:
@implementation TestOverlay

- (instancetype)initWithSize:(CGSize)size
{
    if (self = [super initWithSize:size])
    {
        // Set up the default state.
        self.scaleMode = SKSceneScaleModeResizeFill;
        self.userInteractionEnabled = NO;
        self.backgroundColor = [NSColor blackColor];
    }
    return self;
}

@end
Has anybody run into similar issues before? Annoyingly, the Apple sample Fox2 doesn't have similar problems...
For true enlightenment, one needs to read the documentation carefully, then comment everything out and restore functionality one step at a time. And then read the documentation again.
In the discussion section of -[SCNSceneRendererDelegate renderer:willRenderScene:atTime:], the solution is obvious (emphasis mine):
You should only execute Metal or OpenGL drawing commands (and any setup required to perform them) in this method—the results of modifying SceneKit objects during this method are undefined.
Which is exactly what I was doing. I had misread this as modifying geometry, so thought that assigning textures would be reasonable to do here (after all, "will" render means it hadn't started rendering yet, right?), and would therefore pick the most recently created texture. And unfortunately, before I decided that I needed an overlay, this actually works perfectly well! As soon as the overlay was added, however, the tearing appeared.
The correct place to update material properties seems to be -[SCNSceneRendererDelegate renderer:updateAtTime:]. Use that to avoid silly bugs like this one, folks!
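For what it's worth, a minimal sketch of that move (the quadNode and latestTexture names are hypothetical stand-ins for whatever holds your geometry and texture):

// Safe place to mutate SceneKit objects each frame.
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time {
    // Assign the most recently generated texture here, not in
    // renderer:willRenderScene:atTime:.
    self.quadNode.geometry.firstMaterial.diffuse.contents = self.latestTexture;
}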
Try resetting the SMC (System Management Controller). It helped me solve a similar problem, though with Autodesk Maya 2018 on a 2017 MBP (Radeon 560).
So, shut down and unplug your MBP.
On the built-in keyboard, press and hold the Shift-Option-Control keys on the left side and press the Power Button and hold all of these down for 10 seconds, then release the keys.
Connect the power adapter and then turn the Mac on normally.
Hope this helps.
P.S. In case it doesn't help, try checking/unchecking the Automatic graphics switching option in System Preferences > Energy Saver to see if there's a difference.

UIImageView inside UIScrollView bleeding over next page

I have a UIScrollView and a UIPageControl with 5 pages. Each page is set at a different background color. Each page is 1024 x 768 and only landscape mode is supported and the content size is set to: scrollView.contentSize = CGSizeMake(1024 * 5, 768);
Each page shows the right background color with the right size.
I place a UIImageView on each page, which I can move around. From page 2 onward (or index 1), if the image is moved to the left edge and beyond, it bleeds over to the previous page; you can actually go to the previous page and see part of the image there.
The same however does not happen for the right edge. If the image is moved past the viewable area, you don't see the image on the next page.
My question is how is this possible and what can I do to prevent it?
Thank you in advance
After much playing with this, to help others who may run into the same situation, the answer to this problem is to set:
self.view.clipsToBounds = YES;
on the view where your UIImageView(s) are added as subviews. For me, each page was an instance of a UIViewController subclass with its view set to the bounds of the window.
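In other words, something like this in each page's view controller (a sketch; assumes each page owns its own UIViewController subclass as described):

- (void)viewDidLoad {
    [super viewDidLoad];
    // Keep the draggable UIImageView from drawing outside this page's bounds.
    self.view.clipsToBounds = YES;
}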

Xcode drawRect - UIView bounds not reflecting rotation in simulator

I'm using Xcode 4.5.2 to learn to develop for iOS 6. I have this code in my drawRect...
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGPoint midPoint;
    midPoint.x = self.bounds.origin.x + self.bounds.size.width / 2;
    midPoint.y = self.bounds.origin.y + self.bounds.size.height / 2;
    ... etc
For some reason, when I run this in the iPhone Simulator, the result is always
midPoint.x = 160
midPoint.y = 252
regardless of whether the simulator (iPhone Retina screen) is in portrait mode or rotated to landscape. The result is that the graphic I draw is centred on the screen correctly in portrait, but offset to the left in landscape.
Can someone suggest where do I begin to look as to why this is the case?
This drawRect code came directly from an earlier app I wrote, where it functioned correctly in terms of determining the midpoint of the screen (a UIView spanning the whole screen). The problem arose when I imported this code (the whole class) into my current program, which segues into instances of these UIViews.
Thanks.
Figured it out, I think...
It appears that, for reasons unknown (to me anyway), when the phone is rotated, the view does not get resized to fill the new vertical and horizontal dimensions. Thus the midpoint coordinates are always the same - because the view basically did not resize. In the size inspector, the view had struts on the left and right only, and no springs.
By adding both vertical and horizontal springs, the view resizes with rotation and the midpoint coordinates change accordingly.
While I've gotten past my hurdle, I'm still unclear why I needed to add springs to the view, since the demo I was following did not seem to require this step. Any suggestions would be appreciated. Thanks.
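For anyone doing this in code rather than the size inspector, the equivalent of adding both springs would look something like this (a sketch; myView stands in for the custom UIView):

// Springs in both directions: track the superview's new size on rotation.
myView.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                          UIViewAutoresizingFlexibleHeight;
// Force drawRect: to run again at the new size instead of stretching the old render.
myView.contentMode = UIViewContentModeRedraw;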

Alpha Detection in Layer OK on Simulator, not iPhone

First, check out this very handy extension to CALayer from elsewhere on SO. It helps you determine if a point in a layer's contents-assigned CGImageRef is or isn't transparent.
n.b.: There is no guarantee about a layer's contents being representable or responding as if it was a CGImageRef. (This can have implications for broader use of the extension referenced above, granted.) In my case, however, I know that the layers I'm testing have contents that were assigned a CGImageRef. (Hopefully this can't change out from under me after assignment! Plus I notice that contents is retained.)
OK, back to the problem at hand. Here's how I'm using the extension. For starters, I've changed the selector from containsPoint: to containsNonTransparentPoint: (I need to keep the original method around.)
Now, I have a UIImageView subclass that uses seven CALayer objects. These are used for opacity-based animations (pulsing/glowing effects and on/off states). Each of those seven layers has a known CGImageRef in its contents that effectively "covers" (air quotes) one part of the entire view with its own swath of color. The rest of each image in its respective layer is transparent.
In the subclass, I register for single tap gestures. When one arrives, I walk through my layers to see which one was effectively tapped (that is, which one has a non-transparent point where I tapped, first one found wins) and then I can do whatever needs doing.
Here's how I handle the gesture:
- (IBAction)handleSingleTap:(UIGestureRecognizer *)sender {
    CGPoint tapPoint = [sender locationInView:sender.view];
    // Flip y so 0,0 is at lower left. (Required by layer method below.)
    tapPoint.y = sender.view.bounds.size.height - tapPoint.y;
    // Figure out which layer was effectively tapped. First match wins.
    for (CALayer *layer in myLayers) {
        if ([layer containsNonTransparentPoint:tapPoint]) {
            NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
            // We got our layer! Do something useful with it.
            return;
        }
    }
}
The good news? All of this works beautifully on the iPhone Simulator with iOS 4.3.2. (FWIW, I'm on Lion running Xcode 4.1.)
However, on my iPhone 4 (with iOS 4.3.3), it doesn't even come close! None of my taps seem to match up with any of the layers I'd expect them to.
Even if I try the suggestion to use CGContextSetBlendMode when drawing into the 1x1 pixel context, no dice.
I am hoping it's pilot error, but I have yet to figure out what the disparity is. The taps do have a pattern but not a discernible one.
Perhaps there's a data boundary issue. Perhaps I have to do something other than flip the y coordinate to the lower-left of the image. Just not sure yet.
If anyone can please shed some light on what might be amiss, I would be most appreciative!
UPDATE, 22 September 2011: First ah-ha moment acquired! The problem isn't Simulator-vs-iPhone. It's Retina vs. non-Retina! The same symptoms occur in the Simulator when using the Retina version. Perhaps the solution centers around scaling (CTM?) in some way/shape/form. The Quartz 2D Programming Guide also advises that "iOS applications should use UIGraphicsBeginImageContextWithOptions." I feel like I'm very close to the solution here!
OK! First, the problem wasn't Simulator-vs-iPhone. Rather, it was Retina vs. Non-Retina. The same symptoms occur in the Simulator when using the Retina version. Right away, one starts to think the solution has to do with scaling.
A very helpful post over on the Apple Dev Quartz 2D forum (along similar "be mindful of scaling" lines) steered me toward a solution. Now, I'm the first to admit, this solution is NOT pretty, but it does work for Retina and Non-Retina cases.
With that, here's the revised code for the aforementioned CALayer extension:
//
// Checks image at a point (and at a particular scale factor) for transparency.
// Point must be with origin at lower-left.
//
BOOL ImagePointIsTransparent(CGImageRef image, CGFloat scale, CGPoint point) {
    unsigned char pixel[1] = {0};
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1,
                                                 NULL, kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(-point.x, -point.y,
        CGImageGetWidth(image) / scale, CGImageGetHeight(image) / scale), image);
    CGContextRelease(context);
    CGFloat alpha = pixel[0] / 255.0;
    return (alpha < 0.01);
}

@implementation CALayer (Extensions)

- (BOOL)containsNonTransparentPoint:(CGPoint)point scale:(CGFloat)scale {
    if (CGRectContainsPoint(self.bounds, point)) {
        if (!ImagePointIsTransparent((CGImageRef)self.contents, scale, point))
            return YES;
    }
    return NO;
}

@end
In short, we need to know about the scale. If we divide the image width and height by that scale, ta-dah, the hit test now works on Retina and Non-Retina devices!
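For completeness, here's how the tap handler from the question might call the revised selector (a sketch; it just adds the screen's scale factor to the earlier loop):

CGFloat scale = [[UIScreen mainScreen] scale];
for (CALayer *layer in myLayers) {
    if ([layer containsNonTransparentPoint:tapPoint scale:scale]) {
        NSLog(@"%@ tapped at (%.0f, %.0f)", layer.name, tapPoint.x, tapPoint.y);
        break;
    }
}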
What I don't like about this is the mess I've had to make of that poor selector, now called containsNonTransparentPoint:scale:. As mentioned in the question, there is never any guarantee about what a layer's contents will contain. In my case I am taking care to only use this on layers with a CGImageRef in there, but this won't fly in a more general/reusable case.
All this makes me wonder if CALayer is not the best place for this particular extension after all, at least in this new incarnation. Perhaps CGImage, with some layer smarts thrown in, would be cleaner. Imagine doing a hit test on a CGImage but returning the name of the first layer that had non-transparent content at that point. There's still the problem of not knowing which layers have CGImageRefs in them, so some hinting might be required. (Left as an exercise for yours truly and the reader!)
UPDATE: After some discussion with a developer at Apple, messing with layers in this fashion is in fact ill-advised. Contrary to what I previously learned (incorrectly?), multiple UIImageViews encapsulated within a UIView are the way to go here. (I always remember learning that you want to keep your views to a minimum. Perhaps in this case it isn't as big a deal.) Nevertheless, I'll keep this answer here for now, but will not mark it as correct. Once I try out and verify the other technique, I will share that here!

Cocoa: Remove NSView

I'm doing an exercise in learning and at the same time making a game for my kid. He has one of those card games (like Pokemon), and we scanned a bunch in and are attempting to make a game where he can play against the "computer". The game process starts with you selecting your cards. What I've done is have a class (Card, a subclass of NSView) that gets instantiated by an IBOutlet (button) and drops the first card on the screen, along with scroll buttons - each time a scroll button is clicked, it determines what the next card should be and then calls a method (makeCard) which also instantiates a new Card.
I'm fuzzy about what Cocoa is doing here. Card basically has, in its drawRect, a call to a texture atlas, and I pass in the coordinates of the current card to display. That means that each time I instantiate Card, a new NSView is being made, correct? I am essentially building a stack of NSViews in my app (since the x, y, w, h of each card is the same I can't tell, but that seems like what is logically happening). It doesn't have an effect on app speed, but it seems like unnecessary clutter.
Is there a way that I can just update the image in one instance of the view rather than instantiating a Card for each one I want to show? And regardless of that answer, how do I then remove the view from the window once the setup process is complete? [view removeFromSuperview]?
To be clear, I do not want a visual representation of the cards anymore. They're just eye candy for the setup part of the game, as all the card data (including texture atlas coordinates) is stored in a dictionary.
Also, since I am asking questions here: how would I, without an NSImage, be able to scale the images from the texture atlas? They are 180x250px each, but down the road they'll be shown in a holding area and I'd rather they not be that size.
An answer to part of your question, since I can't figure out the rest of it:
CGImageCreateWithImageInRect will let you create a reference to part of a larger image, as in your texture atlas. You can then create an NSImage from that (if you're targeting 10.6 or later, with -[NSImage initWithCGImage:size:]; otherwise you'll need to create an NSBitmapImageRep first). Then you can display the NSImage in an NSTableView cell, an NSCollectionView, or an NSImageView.
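A rough sketch of that approach, assuming a 180x250 card at a known origin in the atlas (the atlasImage and cardRect names are placeholders; halving the size: argument scales the card down for the holding area):

// Crop one card out of the atlas.
CGRect cardRect = CGRectMake(0, 0, 180, 250); // origin would come from your dictionary
CGImageRef cardRef = CGImageCreateWithImageInRect(atlasImage, cardRect);
// Wrap it in an NSImage (10.6+); a smaller size scales it on display.
NSImage *cardImage = [[NSImage alloc] initWithCGImage:cardRef
                                                 size:NSMakeSize(90, 125)];
CGImageRelease(cardRef);
// cardImage can now go in an NSImageView, table cell, or collection view item.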
